00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 1007 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3674 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.168 Using shallow fetch with depth 1 00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.168 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.585 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.596 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.607 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.607 > git config core.sparsecheckout # timeout=10 00:00:06.618 > git read-tree -mu HEAD # timeout=10 00:00:06.634 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.656 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.656 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.749 [Pipeline] Start of Pipeline 00:00:06.758 [Pipeline] library 00:00:06.759 Loading library shm_lib@master 00:00:06.759 Library shm_lib@master is cached. Copying from home. 00:00:06.770 [Pipeline] node 00:00:06.780 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.781 [Pipeline] { 00:00:06.790 [Pipeline] catchError 00:00:06.791 [Pipeline] { 00:00:06.800 [Pipeline] wrap 00:00:06.806 [Pipeline] { 00:00:06.813 [Pipeline] stage 00:00:06.814 [Pipeline] { (Prologue) 00:00:06.827 [Pipeline] echo 00:00:06.828 Node: VM-host-SM38 00:00:06.832 [Pipeline] cleanWs 00:00:06.843 [WS-CLEANUP] Deleting project workspace... 00:00:06.843 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.852 [WS-CLEANUP] done 00:00:07.027 [Pipeline] setCustomBuildProperty 00:00:07.114 [Pipeline] httpRequest 00:00:07.636 [Pipeline] echo 00:00:07.638 Sorcerer 10.211.164.101 is alive 00:00:07.648 [Pipeline] retry 00:00:07.650 [Pipeline] { 00:00:07.664 [Pipeline] httpRequest 00:00:07.669 HttpMethod: GET 00:00:07.670 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.671 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.673 Response Code: HTTP/1.1 200 OK 00:00:07.674 Success: Status code 200 is in the accepted range: 200,404 00:00:07.674 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.689 [Pipeline] } 00:00:08.706 [Pipeline] // retry 00:00:08.714 [Pipeline] sh 00:00:09.000 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.017 [Pipeline] httpRequest 00:00:09.757 [Pipeline] echo 00:00:09.759 Sorcerer 10.211.164.101 is alive 00:00:09.769 [Pipeline] retry 00:00:09.771 [Pipeline] { 00:00:09.786 [Pipeline] httpRequest 00:00:09.791 HttpMethod: GET 00:00:09.792 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.793 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.805 Response Code: HTTP/1.1 200 OK 00:00:09.806 Success: Status code 200 is in the accepted range: 200,404 00:00:09.807 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:12.708 [Pipeline] } 00:01:12.726 [Pipeline] // retry 00:01:12.734 [Pipeline] sh 00:01:13.021 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:15.567 [Pipeline] sh 00:01:15.851 + git -C spdk log --oneline -n5 00:01:15.851 c13c99a5e test: Various fixes for Fedora40 00:01:15.851 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:15.851 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:15.852 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:15.852 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:15.871 [Pipeline] withCredentials 00:01:15.880 > git --version # timeout=10 00:01:15.891 > git --version # 'git version 2.39.2' 00:01:15.905 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:15.907 [Pipeline] { 00:01:15.914 [Pipeline] retry 00:01:15.916 [Pipeline] { 00:01:15.929 [Pipeline] sh 00:01:16.208 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:16.219 [Pipeline] } 00:01:16.236 [Pipeline] // retry 00:01:16.242 [Pipeline] } 00:01:16.258 [Pipeline] // withCredentials 00:01:16.271 [Pipeline] httpRequest 00:01:16.690 [Pipeline] echo 00:01:16.692 Sorcerer 10.211.164.101 is alive 00:01:16.701 [Pipeline] retry 00:01:16.703 [Pipeline] { 00:01:16.718 [Pipeline] httpRequest 00:01:16.723 HttpMethod: GET 00:01:16.724 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:16.724 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:16.727 Response Code: HTTP/1.1 200 OK 00:01:16.728 Success: Status code 200 is in the accepted range: 200,404 00:01:16.728 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.916 [Pipeline] } 00:01:26.933 [Pipeline] // retry 
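The download steps above pull source snapshots from the local package cache ("Sorcerer"), keyed by exact commit SHA, and unpack them with tar --no-same-owner so the extracted tree belongs to the Jenkins user rather than the archive's original owner. A minimal standalone sketch of that fetch-by-SHA pattern, assuming plain curl in place of the pipeline's httpRequest/retry steps (fetch_pkg is a hypothetical helper; the cache host, workspace path, and SHAs are copied from the log):

```bash
#!/usr/bin/env bash
# Sketch of the fetch-by-SHA pattern seen above; fetch_pkg is a
# hypothetical helper, not part of the SPDK CI scripts.
set -euo pipefail

CACHE=http://10.211.164.101/packages            # package cache from the log
WORKSPACE=/var/jenkins/workspace/nvme-vg-autotest

fetch_pkg() {
    local name=$1 sha=$2
    local tarball="${name}_${sha}.tar.gz"
    # Download the snapshot pinned to an exact commit...
    curl -fSso "${WORKSPACE}/${tarball}" "${CACHE}/${tarball}"
    # ...and unpack it without preserving the archive's owner,
    # so the files end up owned by the Jenkins user.
    tar --no-same-owner -xf "${WORKSPACE}/${tarball}" -C "${WORKSPACE}"
}

fetch_pkg jbp  db4637e8b949f278f369ec13f70585206ccd9507
fetch_pkg spdk c13c99a5eba3bff912124706e0ae1d70defef44d
```

Pinning the tarball name to the commit makes the cache effectively immutable: a given URL always yields the same bits, so the retry blocks and re-runs above are safe.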
00:01:26.941 [Pipeline] sh 00:01:27.226 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.157 [Pipeline] sh 00:01:29.446 + git -C dpdk log --oneline -n5 00:01:29.446 caf0f5d395 version: 22.11.4 00:01:29.446 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:29.446 dc9c799c7d vhost: fix missing spinlock unlock 00:01:29.446 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:29.446 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:29.469 [Pipeline] writeFile 00:01:29.486 [Pipeline] sh 00:01:29.775 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:29.789 [Pipeline] sh 00:01:30.075 + cat autorun-spdk.conf 00:01:30.075 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.075 SPDK_TEST_NVME=1 00:01:30.075 SPDK_TEST_FTL=1 00:01:30.075 SPDK_TEST_ISAL=1 00:01:30.075 SPDK_RUN_ASAN=1 00:01:30.075 SPDK_RUN_UBSAN=1 00:01:30.075 SPDK_TEST_XNVME=1 00:01:30.075 SPDK_TEST_NVME_FDP=1 00:01:30.075 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:30.075 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:30.075 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.084 RUN_NIGHTLY=1 00:01:30.086 [Pipeline] } 00:01:30.101 [Pipeline] // stage 00:01:30.119 [Pipeline] stage 00:01:30.122 [Pipeline] { (Run VM) 00:01:30.137 [Pipeline] sh 00:01:30.419 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:30.419 + echo 'Start stage prepare_nvme.sh' 00:01:30.419 Start stage prepare_nvme.sh 00:01:30.419 + [[ -n 8 ]] 00:01:30.419 + disk_prefix=ex8 00:01:30.419 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:30.419 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:30.419 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:30.419 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.419 ++ SPDK_TEST_NVME=1 00:01:30.419 ++ SPDK_TEST_FTL=1 00:01:30.419 ++ SPDK_TEST_ISAL=1 00:01:30.419 ++ SPDK_RUN_ASAN=1 00:01:30.419 ++ SPDK_RUN_UBSAN=1 00:01:30.419 ++ SPDK_TEST_XNVME=1 00:01:30.419 ++ SPDK_TEST_NVME_FDP=1 00:01:30.419 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:30.419 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:30.419 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.419 ++ RUN_NIGHTLY=1 00:01:30.419 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:30.419 + nvme_files=() 00:01:30.419 + declare -A nvme_files 00:01:30.419 + backend_dir=/var/lib/libvirt/images/backends 00:01:30.419 + nvme_files['nvme.img']=5G 00:01:30.419 + nvme_files['nvme-cmb.img']=5G 00:01:30.419 + nvme_files['nvme-multi0.img']=4G 00:01:30.419 + nvme_files['nvme-multi1.img']=4G 00:01:30.419 + nvme_files['nvme-multi2.img']=4G 00:01:30.419 + nvme_files['nvme-openstack.img']=8G 00:01:30.419 + nvme_files['nvme-zns.img']=5G 00:01:30.419 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:30.419 + (( SPDK_TEST_FTL == 1 )) 00:01:30.419 + nvme_files["nvme-ftl.img"]=6G 00:01:30.419 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:30.419 + nvme_files["nvme-fdp.img"]=1G 00:01:30.419 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:30.419 + for nvme in "${!nvme_files[@]}" 00:01:30.419 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:30.419 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.419 + for nvme in "${!nvme_files[@]}" 00:01:30.419 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:30.681 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:30.681 + for nvme in "${!nvme_files[@]}" 00:01:30.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:30.681 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.681 + for nvme in "${!nvme_files[@]}" 00:01:30.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:30.681 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:30.681 + for nvme in "${!nvme_files[@]}" 00:01:30.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:30.941 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.941 + for nvme in "${!nvme_files[@]}" 00:01:30.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:30.941 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.941 + for nvme in "${!nvme_files[@]}" 00:01:30.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:30.941 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:30.941 + for nvme in "${!nvme_files[@]}" 00:01:30.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:30.941 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:30.941 + for nvme in "${!nvme_files[@]}" 00:01:30.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:31.510 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.510 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:31.510 + echo 'End stage prepare_nvme.sh' 00:01:31.510 End stage prepare_nvme.sh 00:01:31.522 [Pipeline] sh 00:01:31.815 + DISTRO=fedora39 00:01:31.815 + CPUS=10 00:01:31.815 + RAM=12288 00:01:31.815 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:31.815 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:31.815 00:01:31.815 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:31.815 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:31.815 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:31.815 HELP=0 00:01:31.815 DRY_RUN=0 00:01:31.815 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:31.815 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:31.815 NVME_AUTO_CREATE=0 00:01:31.815 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:31.815 NVME_CMB=,,,, 00:01:31.815 NVME_PMR=,,,, 00:01:31.815 NVME_ZNS=,,,, 00:01:31.815 NVME_MS=true,,,, 00:01:31.815 NVME_FDP=,,,on, 00:01:31.815 SPDK_VAGRANT_DISTRO=fedora39 00:01:31.815 SPDK_VAGRANT_VMCPU=10 00:01:31.815 SPDK_VAGRANT_VMRAM=12288 00:01:31.815 SPDK_VAGRANT_PROVIDER=libvirt 00:01:31.815 SPDK_VAGRANT_HTTP_PROXY= 00:01:31.815 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:31.815 SPDK_OPENSTACK_NETWORK=0 00:01:31.815 VAGRANT_PACKAGE_BOX=0 00:01:31.815 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:31.815 FORCE_DISTRO=true 00:01:31.815 VAGRANT_BOX_VERSION= 00:01:31.815 EXTRA_VAGRANTFILES= 00:01:31.815 NIC_MODEL=e1000 00:01:31.815 00:01:31.815 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:31.815 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:34.366 Bringing machine 'default' up with 'libvirt' provider... 00:01:34.625 ==> default: Creating image (snapshot of base box volume). 00:01:34.886 ==> default: Creating domain with the following settings... 
00:01:34.886 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732775205_8b6d63146eed46b8213f 00:01:34.886 ==> default: -- Domain type: kvm 00:01:34.886 ==> default: -- Cpus: 10 00:01:34.886 ==> default: -- Feature: acpi 00:01:34.886 ==> default: -- Feature: apic 00:01:34.886 ==> default: -- Feature: pae 00:01:34.886 ==> default: -- Memory: 12288M 00:01:34.886 ==> default: -- Memory Backing: hugepages: 00:01:34.886 ==> default: -- Management MAC: 00:01:34.886 ==> default: -- Loader: 00:01:34.886 ==> default: -- Nvram: 00:01:34.886 ==> default: -- Base box: spdk/fedora39 00:01:34.886 ==> default: -- Storage pool: default 00:01:34.886 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732775205_8b6d63146eed46b8213f.img (20G) 00:01:34.886 ==> default: -- Volume Cache: default 00:01:34.886 ==> default: -- Kernel: 00:01:34.886 ==> default: -- Initrd: 00:01:34.886 ==> default: -- Graphics Type: vnc 00:01:34.886 ==> default: -- Graphics Port: -1 00:01:34.886 ==> default: -- Graphics IP: 127.0.0.1 00:01:34.886 ==> default: -- Graphics Password: Not defined 00:01:34.886 ==> default: -- Video Type: cirrus 00:01:34.886 ==> default: -- Video VRAM: 9216 00:01:34.886 ==> default: -- Sound Type: 00:01:34.886 ==> default: -- Keymap: en-us 00:01:34.886 ==> default: -- TPM Path: 00:01:34.886 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:34.886 ==> default: -- Command line args: 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:34.886 ==> default: -> value=-drive, 00:01:34.886 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:34.886 ==> default: -> value=-device, 00:01:34.886 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.148 ==> default: Creating shared folders metadata... 00:01:35.148 ==> default: Starting domain. 00:01:37.063 ==> default: Waiting for domain to get an IP address... 00:01:59.014 ==> default: Waiting for SSH to become available... 00:01:59.014 ==> default: Configuring and enabling network interfaces... 00:02:00.942 default: SSH address: 192.168.121.12:22 00:02:00.942 default: SSH username: vagrant 00:02:00.942 default: SSH auth method: private key 00:02:02.858 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:09.448 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:13.657 ==> default: Mounting SSHFS shared folder... 00:02:15.031 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.031 ==> default: Checking Mount.. 00:02:15.963 ==> default: Folder Successfully Mounted! 00:02:15.963 00:02:15.963 SUCCESS! 00:02:15.963 00:02:15.963 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:15.963 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.963 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:15.963 00:02:15.969 [Pipeline] } 00:02:15.978 [Pipeline] // stage 00:02:15.984 [Pipeline] dir 00:02:15.985 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:15.986 [Pipeline] { 00:02:15.994 [Pipeline] catchError 00:02:15.996 [Pipeline] { 00:02:16.003 [Pipeline] sh 00:02:16.274 + vagrant ssh-config --host vagrant 00:02:16.274 + tee ssh_conf 00:02:16.274 + sed -ne '/^Host/,$p' 00:02:18.806 Host vagrant 00:02:18.806 HostName 192.168.121.12 00:02:18.806 User vagrant 00:02:18.806 Port 22 00:02:18.806 UserKnownHostsFile /dev/null 00:02:18.806 StrictHostKeyChecking no 00:02:18.806 PasswordAuthentication no 00:02:18.806 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:18.806 IdentitiesOnly yes 00:02:18.806 LogLevel FATAL 00:02:18.806 ForwardAgent yes 00:02:18.806 ForwardX11 yes 00:02:18.806 00:02:18.821 [Pipeline] withEnv 00:02:18.823 [Pipeline] { 00:02:18.837 [Pipeline] sh 00:02:19.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:19.120 source /etc/os-release 00:02:19.120 [[ -e /image.version ]] && img=$(< /image.version) 00:02:19.120 # Minimal, systemd-like check. 
00:02:19.120 if [[ -e /.dockerenv ]]; then 00:02:19.120 # Clear garbage from the node'\''s name: 00:02:19.120 # agt-er_autotest_547-896 -> autotest_547-896 00:02:19.120 # $HOSTNAME is the actual container id 00:02:19.120 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:19.120 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:19.120 # We can assume this is a mount from a host where container is running, 00:02:19.120 # so fetch its hostname to easily identify the target swarm worker. 00:02:19.120 container="$(< /etc/hostname) ($agent)" 00:02:19.120 else 00:02:19.120 # Fallback 00:02:19.120 container=$agent 00:02:19.120 fi 00:02:19.120 fi 00:02:19.120 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:19.120 ' 00:02:19.134 [Pipeline] } 00:02:19.151 [Pipeline] // withEnv 00:02:19.161 [Pipeline] setCustomBuildProperty 00:02:19.176 [Pipeline] stage 00:02:19.178 [Pipeline] { (Tests) 00:02:19.196 [Pipeline] sh 00:02:19.481 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:19.497 [Pipeline] sh 00:02:19.782 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:20.060 [Pipeline] timeout 00:02:20.061 Timeout set to expire in 50 min 00:02:20.062 [Pipeline] { 00:02:20.077 [Pipeline] sh 00:02:20.361 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:20.621 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:20.635 [Pipeline] sh 00:02:20.918 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:20.933 [Pipeline] sh 00:02:21.214 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.493 [Pipeline] sh 00:02:21.778 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:22.039 ++ readlink -f spdk_repo 00:02:22.039 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.039 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.039 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.039 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.039 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.039 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:22.039 + cd /home/vagrant/spdk_repo 00:02:22.039 + source /etc/os-release 00:02:22.039 ++ NAME='Fedora Linux' 00:02:22.039 ++ VERSION='39 (Cloud Edition)' 00:02:22.039 ++ ID=fedora 00:02:22.039 ++ VERSION_ID=39 00:02:22.039 ++ VERSION_CODENAME= 00:02:22.039 ++ PLATFORM_ID=platform:f39 00:02:22.039 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.039 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.039 ++ LOGO=fedora-logo-icon 00:02:22.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.039 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.039 ++ SUPPORT_END=2024-11-12 00:02:22.039 ++ VARIANT='Cloud Edition' 00:02:22.039 ++ VARIANT_ID=cloud 00:02:22.039 + uname -a 00:02:22.039 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.039 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.039 Hugepages 00:02:22.039 node hugesize free / total 00:02:22.039 node0 1048576kB 0 / 0 00:02:22.039 node0 2048kB 0 / 0 00:02:22.039 00:02:22.039 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.039 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.039 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.300 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:22.300 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:22.300 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:22.300 + rm -f /tmp/spdk-ld-path 00:02:22.300 + source autorun-spdk.conf 00:02:22.300 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.300 ++ SPDK_TEST_NVME=1 00:02:22.300 ++ SPDK_TEST_FTL=1 00:02:22.300 ++ SPDK_TEST_ISAL=1 00:02:22.300 ++ SPDK_RUN_ASAN=1 00:02:22.300 ++ SPDK_RUN_UBSAN=1 00:02:22.300 ++ SPDK_TEST_XNVME=1 00:02:22.300 ++ SPDK_TEST_NVME_FDP=1 00:02:22.300 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.300 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.300 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.300 ++ RUN_NIGHTLY=1 00:02:22.300 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.300 + [[ -n '' ]] 00:02:22.300 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.300 + for M in /var/spdk/build-*-manifest.txt 00:02:22.300 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:22.300 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.300 + for M in /var/spdk/build-*-manifest.txt 00:02:22.300 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.300 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.300 + for M in /var/spdk/build-*-manifest.txt 00:02:22.300 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.300 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.300 ++ uname 00:02:22.300 + [[ Linux == \L\i\n\u\x ]] 00:02:22.300 + sudo dmesg -T 00:02:22.300 + sudo dmesg --clear 00:02:22.300 + dmesg_pid=5717 00:02:22.300 + 
[[ Fedora Linux == FreeBSD ]] 00:02:22.300 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.300 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.300 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.300 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.300 + sudo dmesg -Tw 00:02:22.300 + export FIO_BIN=/usr/src/fio-static/fio 00:02:22.300 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.300 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.300 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:22.300 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.300 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.300 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.300 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.300 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.300 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.300 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.300 Test configuration: 00:02:22.300 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.300 SPDK_TEST_NVME=1 00:02:22.300 SPDK_TEST_FTL=1 00:02:22.300 SPDK_TEST_ISAL=1 00:02:22.300 SPDK_RUN_ASAN=1 00:02:22.300 SPDK_RUN_UBSAN=1 00:02:22.300 SPDK_TEST_XNVME=1 00:02:22.300 SPDK_TEST_NVME_FDP=1 00:02:22.300 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.300 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.300 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.300 RUN_NIGHTLY=1 06:27:32 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:22.300 06:27:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:22.300 06:27:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:22.300 06:27:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.301 06:27:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.301 06:27:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.301 06:27:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.301 06:27:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.301 06:27:33 -- paths/export.sh@5 -- $ export PATH 00:02:22.301 06:27:33 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.301 06:27:33 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:22.301 06:27:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:22.301 06:27:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732775253.XXXXXX 00:02:22.301 06:27:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732775253.0DC2Na 00:02:22.301 06:27:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:22.301 06:27:33 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:22.301 06:27:33 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.301 06:27:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:22.301 06:27:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:22.301 06:27:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:22.301 06:27:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:22.301 06:27:33 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:22.301 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.301 06:27:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:22.301 06:27:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:22.301 06:27:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:22.301 06:27:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:22.301 06:27:33 -- spdk/autobuild.sh@16 -- $ date -u 00:02:22.301 Thu Nov 28 06:27:33 AM UTC 2024 00:02:22.301 06:27:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:22.301 LTS-67-gc13c99a5e 00:02:22.301 06:27:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:22.301 06:27:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:22.301 06:27:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:22.301 06:27:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:22.301 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.301 ************************************ 00:02:22.301 START TEST asan 00:02:22.301 ************************************ 00:02:22.301 using asan 00:02:22.301 06:27:33 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:02:22.301 00:02:22.301 real 0m0.000s 00:02:22.301 user 0m0.000s 00:02:22.301 sys 0m0.000s 00:02:22.301 06:27:33 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:22.301 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.301 ************************************ 00:02:22.301 END TEST asan 00:02:22.301 ************************************ 00:02:22.563 06:27:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:22.563 06:27:33 -- 
spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:22.563 06:27:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:22.563 06:27:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:22.563 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.563 ************************************ 00:02:22.563 START TEST ubsan 00:02:22.563 ************************************ 00:02:22.563 using ubsan 00:02:22.563 06:27:33 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:22.563 00:02:22.563 real 0m0.000s 00:02:22.563 user 0m0.000s 00:02:22.563 sys 0m0.000s 00:02:22.563 06:27:33 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:22.563 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.563 ************************************ 00:02:22.563 END TEST ubsan 00:02:22.563 ************************************ 00:02:22.563 06:27:33 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:22.563 06:27:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:22.563 06:27:33 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:22.563 06:27:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:22.563 06:27:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.563 ************************************ 00:02:22.563 START TEST build_native_dpdk 00:02:22.563 ************************************ 00:02:22.563 06:27:33 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:22.563 06:27:33 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:22.563 06:27:33 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:22.563 06:27:33 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:22.563 06:27:33 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:22.563 06:27:33 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:22.563 06:27:33 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:22.563 06:27:33 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:22.563 06:27:33 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:22.563 06:27:33 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:22.563 06:27:33 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:22.563 06:27:33 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:22.563 06:27:33 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.563 06:27:33 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:22.563 caf0f5d395 version: 22.11.4 00:02:22.563 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:22.563 dc9c799c7d vhost: fix missing spinlock unlock 00:02:22.563 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:22.563 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:22.563 06:27:33 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:22.563 06:27:33 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:22.563 06:27:33 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:22.563 06:27:33 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:22.563 06:27:33 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:22.563 06:27:33 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:22.563 06:27:33 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:22.563 06:27:33 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:22.563 06:27:33 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:22.563 06:27:33 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:22.563 06:27:33 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:22.563 06:27:33 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:22.563 06:27:33 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:22.563 06:27:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:22.563 06:27:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:22.563 06:27:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:22.563 06:27:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:22.563 06:27:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:22.563 06:27:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:22.563 06:27:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:22.563 06:27:33 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:22.563 06:27:33 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:22.563 06:27:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:22.563 06:27:33 -- scripts/common.sh@343 -- $ case "$op" in 00:02:22.563 06:27:33 -- scripts/common.sh@344 -- $ : 1 00:02:22.563 06:27:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:22.563 06:27:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:22.563 06:27:33 -- scripts/common.sh@364 -- $ decimal 22 00:02:22.563 06:27:33 -- scripts/common.sh@352 -- $ local d=22 00:02:22.563 06:27:33 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:22.563 06:27:33 -- scripts/common.sh@354 -- $ echo 22 00:02:22.563 06:27:33 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:22.563 06:27:33 -- scripts/common.sh@365 -- $ decimal 21 00:02:22.563 06:27:33 -- scripts/common.sh@352 -- $ local d=21 00:02:22.563 06:27:33 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:22.563 06:27:33 -- scripts/common.sh@354 -- $ echo 21 00:02:22.563 06:27:33 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:22.563 06:27:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:22.563 06:27:33 -- scripts/common.sh@366 -- $ return 1 00:02:22.563 06:27:33 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:22.563 patching file config/rte_config.h 00:02:22.563 Hunk #1 succeeded at 60 (offset 1 line). 00:02:22.563 06:27:33 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:22.563 06:27:33 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:22.563 06:27:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:22.563 06:27:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:22.563 06:27:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:22.563 06:27:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:22.563 06:27:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:22.563 06:27:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:22.563 06:27:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:22.563 06:27:33 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:22.563 06:27:33 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:22.563 06:27:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:22.563 06:27:33 -- scripts/common.sh@343 -- $ case "$op" in 00:02:22.563 06:27:33 -- scripts/common.sh@344 -- $ : 1 00:02:22.563 06:27:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:22.563 06:27:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:22.563 06:27:33 -- scripts/common.sh@364 -- $ decimal 22 00:02:22.563 06:27:33 -- scripts/common.sh@352 -- $ local d=22 00:02:22.563 06:27:33 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:22.563 06:27:33 -- scripts/common.sh@354 -- $ echo 22 00:02:22.563 06:27:33 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:22.563 06:27:33 -- scripts/common.sh@365 -- $ decimal 24 00:02:22.563 06:27:33 -- scripts/common.sh@352 -- $ local d=24 00:02:22.563 06:27:33 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:22.563 06:27:33 -- scripts/common.sh@354 -- $ echo 24 00:02:22.564 06:27:33 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:22.564 06:27:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:22.564 06:27:33 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:22.564 06:27:33 -- scripts/common.sh@367 -- $ return 0 00:02:22.564 06:27:33 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:22.564 patching file lib/pcapng/rte_pcapng.c 00:02:22.564 Hunk #1 succeeded at 110 (offset -18 lines). 
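The xtrace above steps through cmp_versions from scripts/common.sh twice: 22.11.4 is not older than 21.11.0 (so the rte_config.h patch applies) and is older than 24.07.0 (so the rte_pcapng.c patch applies). The comparison splits each version on '.', '-', and ':' and walks the components numerically, left to right. A condensed re-implementation of the same walk (lt here is a hypothetical stand-in for the script's lt/cmp_versions pair, assuming purely numeric components):

```bash
# Condensed sketch of the version walk traced above:
# split on . - : and compare components numerically, left to right.
lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]}; v++)); do
        (( ver1[v] > ver2[v] )) && return 1   # component greater: not less
        (( ver1[v] < ver2[v] )) && return 0   # component smaller: strictly less
    done
    return 1   # all components equal: not strictly less
}

lt 22.11.4 21.11.0 || echo "22.11.4 >= 21.11.0: apply rte_config.h patch"
lt 22.11.4 24.07.0 && echo "22.11.4 <  24.07.0: apply rte_pcapng.c patch"
```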
00:02:22.564 06:27:33 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:22.564 06:27:33 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:22.564 06:27:33 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:22.564 06:27:33 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:22.564 06:27:33 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:26.772 The Meson build system 00:02:26.772 Version: 1.5.0 00:02:26.772 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:26.772 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:26.772 Build type: native build 00:02:26.772 Program cat found: YES (/usr/bin/cat) 00:02:26.772 Project name: DPDK 00:02:26.772 Project version: 22.11.4 00:02:26.772 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:26.772 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:26.772 Host machine cpu family: x86_64 00:02:26.772 Host machine cpu: x86_64 00:02:26.772 Message: ## Building in Developer Mode ## 00:02:26.772 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.772 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:26.772 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.772 Program objdump found: YES (/usr/bin/objdump) 00:02:26.772 Program python3 found: YES (/usr/bin/python3) 00:02:26.772 Program cat found: YES (/usr/bin/cat) 00:02:26.772 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:26.772 Checking for size of "void *" : 8 00:02:26.772 Checking for size of "void *" : 8 (cached) 00:02:26.772 Library m found: YES 00:02:26.772 Library numa found: YES 00:02:26.772 Has header "numaif.h" : YES 00:02:26.772 Library fdt found: NO 00:02:26.772 Library execinfo found: NO 00:02:26.772 Has header "execinfo.h" : YES 00:02:26.772 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:26.772 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.772 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.772 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.772 Run-time dependency openssl found: YES 3.1.1 00:02:26.772 Run-time dependency libpcap found: YES 1.10.4 00:02:26.772 Has header "pcap.h" with dependency libpcap: YES 00:02:26.772 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.772 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.772 Compiler for C supports arguments -Wformat: YES 00:02:26.772 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.772 Compiler for C supports arguments -Wformat-security: NO 00:02:26.772 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.772 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.772 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.772 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.772 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.773 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.773 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.773 Compiler for C supports arguments -Wundef: YES 00:02:26.773 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.773 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.773 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.773 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.773 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.773 Compiler for C supports arguments -mavx512f: YES 00:02:26.773 Checking if "AVX512 checking" compiles: YES 00:02:26.773 Fetching value of define "__SSE4_2__" : 1 00:02:26.773 Fetching value of define "__AES__" : 1 00:02:26.773 Fetching value of define "__AVX__" : 1 00:02:26.773 Fetching value of define "__AVX2__" : 1 00:02:26.773 Fetching value of define "__AVX512BW__" : 1 00:02:26.773 Fetching value of define "__AVX512CD__" : 1 00:02:26.773 Fetching value of define "__AVX512DQ__" : 1 00:02:26.773 Fetching value of define "__AVX512F__" : 1 00:02:26.773 Fetching value of define "__AVX512VL__" : 1 00:02:26.773 Fetching value of define "__PCLMUL__" : 1 00:02:26.773 Fetching value of define "__RDRND__" : 1 00:02:26.773 Fetching value of define "__RDSEED__" : 1 00:02:26.773 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:26.773 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.773 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.773 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.773 Checking for function "getentropy" : YES 00:02:26.773 Message: lib/eal: Defining dependency "eal" 00:02:26.773 Message: lib/ring: Defining dependency "ring" 00:02:26.773 Message: lib/rcu: Defining dependency "rcu" 00:02:26.773 Message: lib/mempool: Defining dependency "mempool" 00:02:26.773 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.773 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.773 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:26.773 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:26.773 Compiler for C supports arguments -mpclmul: YES 00:02:26.773 Compiler for C supports arguments -maes: YES 00:02:26.773 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.773 Compiler for C supports arguments -mavx512bw: YES 00:02:26.773 Compiler for C supports arguments -mavx512dq: YES 00:02:26.773 Compiler for C supports arguments -mavx512vl: YES 00:02:26.773 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.773 Compiler for C supports arguments -mavx2: YES 00:02:26.773 Compiler for C supports arguments -mavx: YES 00:02:26.773 Message: lib/net: Defining dependency "net" 00:02:26.773 Message: lib/meter: Defining dependency "meter" 00:02:26.773 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.773 Message: lib/pci: Defining dependency "pci" 00:02:26.773 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.773 Message: lib/metrics: Defining dependency "metrics" 00:02:26.773 Message: lib/hash: Defining dependency "hash" 00:02:26.773 Message: lib/timer: Defining dependency "timer" 00:02:26.773 Fetching value of define "__AVX2__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:26.773 Message: lib/acl: Defining dependency "acl" 00:02:26.773 Message: lib/bbdev: Defining dependency "bbdev" 00:02:26.773 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:26.773 Run-time dependency libelf found: YES 0.191 00:02:26.773 Message: lib/bpf: Defining dependency "bpf" 00:02:26.773 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:26.773 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.773 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.773 Message: lib/distributor: Defining dependency "distributor" 00:02:26.773 Message: lib/efd: Defining dependency "efd" 00:02:26.773 Message: lib/eventdev: Defining dependency "eventdev" 00:02:26.773 Message: lib/gpudev: Defining dependency "gpudev" 00:02:26.773 Message: lib/gro: Defining dependency "gro" 00:02:26.773 Message: lib/gso: Defining dependency "gso" 00:02:26.773 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:26.773 Message: lib/jobstats: Defining dependency "jobstats" 00:02:26.773 Message: lib/latencystats: Defining dependency "latencystats" 00:02:26.773 Message: lib/lpm: Defining dependency "lpm" 00:02:26.773 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:26.773 Fetching value of define "__AVX512IFMA__" : 1 00:02:26.773 Message: lib/member: Defining dependency "member" 00:02:26.773 Message: lib/pcapng: Defining dependency "pcapng" 00:02:26.773 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.773 Message: lib/power: Defining dependency "power" 00:02:26.773 Message: lib/rawdev: Defining dependency "rawdev" 00:02:26.773 Message: lib/regexdev: Defining dependency "regexdev" 00:02:26.773 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.773 Message: lib/rib: Defining dependency "rib" 00:02:26.773 Message: lib/reorder: 
Defining dependency "reorder"
00:02:26.773 Message: lib/sched: Defining dependency "sched"
00:02:26.773 Message: lib/security: Defining dependency "security"
00:02:26.773 Message: lib/stack: Defining dependency "stack"
00:02:26.773 Has header "linux/userfaultfd.h" : YES
00:02:26.773 Message: lib/vhost: Defining dependency "vhost"
00:02:26.773 Message: lib/ipsec: Defining dependency "ipsec"
00:02:26.773 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.773 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:26.773 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.773 Message: lib/fib: Defining dependency "fib"
00:02:26.773 Message: lib/port: Defining dependency "port"
00:02:26.773 Message: lib/pdump: Defining dependency "pdump"
00:02:26.773 Message: lib/table: Defining dependency "table"
00:02:26.773 Message: lib/pipeline: Defining dependency "pipeline"
00:02:26.773 Message: lib/graph: Defining dependency "graph"
00:02:26.773 Message: lib/node: Defining dependency "node"
00:02:26.773 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:26.773 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:26.773 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:26.773 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:26.773 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:26.773 Compiler for C supports arguments -Wno-unused-value: YES
00:02:26.773 Compiler for C supports arguments -Wno-format: YES
00:02:26.773 Compiler for C supports arguments -Wno-format-security: YES
00:02:26.773 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:26.773 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:26.774 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:26.774 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:28.160 Fetching value of define "__AVX2__" : 1 (cached)
00:02:28.160 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:28.160 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:28.160 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:28.160 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:28.160 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:28.160 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:28.160 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:28.160 Configuring doxy-api.conf using configuration
00:02:28.160 Program sphinx-build found: NO
00:02:28.160 Configuring rte_build_config.h using configuration
00:02:28.160 Message: 
00:02:28.160 =================
00:02:28.160 Applications Enabled
00:02:28.160 =================
00:02:28.160 
00:02:28.160 apps:
00:02:28.160     dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 
00:02:28.160     test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 
00:02:28.160     test-security-perf, 
00:02:28.160 
00:02:28.160 Message: 
00:02:28.160 =================
00:02:28.160 Libraries Enabled
00:02:28.160 =================
00:02:28.160 
00:02:28.160 libs:
00:02:28.160     kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 
00:02:28.160     meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 
00:02:28.160     bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 
00:02:28.160     eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 
00:02:28.160     member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 
00:02:28.160     sched, security, stack, vhost, ipsec, fib, port, pdump, 
00:02:28.160     table, pipeline, graph, node, 
00:02:28.160 
00:02:28.160 Message: 
00:02:28.160 ===============
00:02:28.160 Drivers Enabled
00:02:28.160 ===============
00:02:28.160 
00:02:28.160 common:
00:02:28.160 
00:02:28.160 bus:
00:02:28.160     pci, vdev, 
00:02:28.160 mempool:
00:02:28.160     ring, 
00:02:28.160 dma:
00:02:28.160 
00:02:28.160 net:
00:02:28.160     i40e, 
00:02:28.160 raw:
00:02:28.160 
00:02:28.160 crypto:
00:02:28.160 
00:02:28.160 compress:
00:02:28.160 
00:02:28.160 regex:
00:02:28.160 
00:02:28.160 vdpa:
00:02:28.160 
00:02:28.160 event:
00:02:28.160 
00:02:28.160 baseband:
00:02:28.160 
00:02:28.160 gpu:
00:02:28.160 
00:02:28.160 
00:02:28.160 Message: 
00:02:28.160 =================
00:02:28.160 Content Skipped
00:02:28.160 =================
00:02:28.160 
00:02:28.160 apps:
00:02:28.160 
00:02:28.160 libs:
00:02:28.160     kni: explicitly disabled via build config (deprecated lib)
00:02:28.160     flow_classify: explicitly disabled via build config (deprecated lib)
00:02:28.160 
00:02:28.160 drivers:
00:02:28.160     common/cpt: not in enabled drivers build config
00:02:28.160     common/dpaax: not in enabled drivers build config
00:02:28.160     common/iavf: not in enabled drivers build config
00:02:28.160     common/idpf: not in enabled drivers build config
00:02:28.160     common/mvep: not in enabled drivers build config
00:02:28.160     common/octeontx: not in enabled drivers build config
00:02:28.160     bus/auxiliary: not in enabled drivers build config
00:02:28.160     bus/dpaa: not in enabled drivers build config
00:02:28.160     bus/fslmc: not in enabled drivers build config
00:02:28.160     bus/ifpga: not in enabled drivers build config
00:02:28.160     bus/vmbus: not in enabled drivers build config
00:02:28.160     common/cnxk: not in enabled drivers build config
00:02:28.160     common/mlx5: not in enabled drivers build config
00:02:28.160     common/qat: not in enabled drivers build config
00:02:28.160     common/sfc_efx: not in enabled drivers build config
00:02:28.160     mempool/bucket: not in enabled drivers build config
00:02:28.160     mempool/cnxk: not in enabled drivers build config
00:02:28.160     mempool/dpaa: not in enabled drivers build config
00:02:28.160     mempool/dpaa2: not in enabled drivers build config
00:02:28.160     mempool/octeontx: not in enabled drivers build config
00:02:28.160     mempool/stack: not in enabled drivers build config
00:02:28.160     dma/cnxk: not in enabled drivers build config
00:02:28.160     dma/dpaa: not in enabled drivers build config
00:02:28.160     dma/dpaa2: not in enabled drivers build config
00:02:28.160     dma/hisilicon: not in enabled drivers build config
00:02:28.160     dma/idxd: not in enabled drivers build config
00:02:28.160     dma/ioat: not in enabled drivers build config
00:02:28.160     dma/skeleton: not in enabled drivers build config
00:02:28.160     net/af_packet: not in enabled drivers build config
00:02:28.160     net/af_xdp: not in enabled drivers build config
00:02:28.160     net/ark: not in enabled drivers build config
00:02:28.160     net/atlantic: not in enabled drivers build config
00:02:28.160     net/avp: not in enabled drivers build config
00:02:28.160     net/axgbe: not in enabled drivers build config
00:02:28.160     net/bnx2x: not in enabled drivers build config
00:02:28.160     net/bnxt: not in enabled drivers build config
00:02:28.160     net/bonding: not in enabled drivers build config
00:02:28.160     net/cnxk: not in enabled drivers build config
00:02:28.160     net/cxgbe: not in enabled drivers build config
00:02:28.160     net/dpaa: not in enabled drivers build config
00:02:28.160     net/dpaa2: not in enabled drivers build config
00:02:28.160     net/e1000: not in enabled drivers build config
00:02:28.160     net/ena: not in enabled drivers build config
00:02:28.160     net/enetc: not in enabled drivers build config
00:02:28.160     net/enetfec: not in enabled drivers build config
00:02:28.160     net/enic: not in enabled drivers build config
00:02:28.160     net/failsafe: not in enabled drivers build config
00:02:28.160     net/fm10k: not in enabled drivers build config
00:02:28.160     net/gve: not in enabled drivers build config
00:02:28.160     net/hinic: not in enabled drivers build config
00:02:28.160     net/hns3: not in enabled drivers build config
00:02:28.160     net/iavf: not in enabled drivers build config
00:02:28.160     net/ice: not in enabled drivers build config
00:02:28.160     net/idpf: not in enabled drivers build config
00:02:28.160     net/igc: not in enabled drivers build config
00:02:28.160     net/ionic: not in enabled drivers build config
00:02:28.160     net/ipn3ke: not in enabled drivers build config
00:02:28.160     net/ixgbe: not in enabled drivers build config
00:02:28.160     net/kni: not in enabled drivers build config
00:02:28.160     net/liquidio: not in enabled drivers build config
00:02:28.160     net/mana: not in enabled drivers build config
00:02:28.160     net/memif: not in enabled drivers build config
00:02:28.160     net/mlx4: not in enabled drivers build config
00:02:28.160     net/mlx5: not in enabled drivers build config
00:02:28.160     net/mvneta: not in enabled drivers build config
00:02:28.160     net/mvpp2: not in enabled drivers build config
00:02:28.160     net/netvsc: not in enabled drivers build config
00:02:28.160     net/nfb: not in enabled drivers build config
00:02:28.160     net/nfp: not in enabled drivers build config
00:02:28.160     net/ngbe: not in enabled drivers build config
00:02:28.160     net/null: not in enabled drivers build config
00:02:28.161     net/octeontx: not in enabled drivers build config
00:02:28.161     net/octeon_ep: not in enabled drivers build config
00:02:28.161     net/pcap: not in enabled drivers build config
00:02:28.161     net/pfe: not in enabled drivers build config
00:02:28.161     net/qede: not in enabled drivers build config
00:02:28.161     net/ring: not in enabled drivers build config
00:02:28.161     net/sfc: not in enabled drivers build config
00:02:28.161     net/softnic: not in enabled drivers build config
00:02:28.161     net/tap: not in enabled drivers build config
00:02:28.161     net/thunderx: not in enabled drivers build config
00:02:28.161     net/txgbe: not in enabled drivers build config
00:02:28.161     net/vdev_netvsc: not in enabled drivers build config
00:02:28.161     net/vhost: not in enabled drivers build config
00:02:28.161     net/virtio: not in enabled drivers build config
00:02:28.161     net/vmxnet3: not in enabled drivers build config
00:02:28.161     raw/cnxk_bphy: not in enabled drivers build config
00:02:28.161     raw/cnxk_gpio: not in enabled drivers build config
00:02:28.161     raw/dpaa2_cmdif: not in enabled drivers build config
00:02:28.161     raw/ifpga: not in enabled drivers build config
00:02:28.161     raw/ntb: not in enabled drivers build config
00:02:28.161     raw/skeleton: not in enabled drivers build config
00:02:28.161     crypto/armv8: not in enabled drivers build config
00:02:28.161     crypto/bcmfs: not in enabled drivers build config
00:02:28.161     crypto/caam_jr: not in enabled drivers build config
00:02:28.161     crypto/ccp: not in enabled drivers build config
00:02:28.161     crypto/cnxk: not in enabled drivers build config
00:02:28.161     crypto/dpaa_sec: not in enabled drivers build config
00:02:28.161     crypto/dpaa2_sec: not in enabled drivers build config
00:02:28.161     crypto/ipsec_mb: not in enabled drivers build config
00:02:28.161     crypto/mlx5: not in enabled drivers build config
00:02:28.161     crypto/mvsam: not in enabled drivers build config
00:02:28.161     crypto/nitrox: not in enabled drivers build config
00:02:28.161     crypto/null: not in enabled drivers build config
00:02:28.161     crypto/octeontx: not in enabled drivers build config
00:02:28.161     crypto/openssl: not in enabled drivers build config
00:02:28.161     crypto/scheduler: not in enabled drivers build config
00:02:28.161     crypto/uadk: not in enabled drivers build config
00:02:28.161     crypto/virtio: not in enabled drivers build config
00:02:28.161     compress/isal: not in enabled drivers build config
00:02:28.161     compress/mlx5: not in enabled drivers build config
00:02:28.161     compress/octeontx: not in enabled drivers build config
00:02:28.161     compress/zlib: not in enabled drivers build config
00:02:28.161     regex/mlx5: not in enabled drivers build config
00:02:28.161     regex/cn9k: not in enabled drivers build config
00:02:28.161     vdpa/ifc: not in enabled drivers build config
00:02:28.161     vdpa/mlx5: not in enabled drivers build config
00:02:28.161     vdpa/sfc: not in enabled drivers build config
00:02:28.161     event/cnxk: not in enabled drivers build config
00:02:28.161     event/dlb2: not in enabled drivers build config
00:02:28.161     event/dpaa: not in enabled drivers build config
00:02:28.161     event/dpaa2: not in enabled drivers build config
00:02:28.161     event/dsw: not in enabled drivers build config
00:02:28.161     event/opdl: not in enabled drivers build config
00:02:28.161     event/skeleton: not in enabled drivers build config
00:02:28.161     event/sw: not in enabled drivers build config
00:02:28.161     event/octeontx: not in enabled drivers build config
00:02:28.161     baseband/acc: not in enabled drivers build config
00:02:28.161     baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:28.161     baseband/fpga_lte_fec: not in enabled drivers build config
00:02:28.161     baseband/la12xx: not in enabled drivers build config
00:02:28.161     baseband/null: not in enabled drivers build config
00:02:28.161     baseband/turbo_sw: not in enabled drivers build config
00:02:28.161     gpu/cuda: not in enabled drivers build config
00:02:28.161 
00:02:28.161 
00:02:28.161 Build targets in project: 309
00:02:28.161 
00:02:28.161 DPDK 22.11.4
00:02:28.161 
00:02:28.161 User defined options
00:02:28.161     libdir : lib
00:02:28.161     prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:28.161     c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:28.161     c_link_args : 
00:02:28.161     enable_docs : false
00:02:28.161     enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:28.161     enable_kmods : false
00:02:28.161     machine : native
00:02:28.161     tests : false
00:02:28.161 
00:02:28.161 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:28.161 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
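The "User defined options" summary above records everything needed to re-run this configure step by hand. A rough sketch only, with paths and option values taken from the summary: the exact wrapper invocation inside autobuild_common.sh is not shown in this log, and the warning above indicates the harness used the older `meson [options]` spelling rather than the modern `meson setup` form sketched here.

    cd /home/vagrant/spdk_repo/dpdk
    # Configure an out-of-tree build dir with the same options the log reports.
    meson setup build-tmp \
        --libdir lib \
        --prefix /home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

c_link_args is left empty, matching the summary, and the `ninja -C .../build-tmp -j10` invocation that the log shows next is the compile step driven by this configuration.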
00:02:28.161 06:27:38 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:28.434 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:28.434 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:28.434 [2/738] Generating lib/rte_telemetry_def with a custom command 00:02:28.434 [3/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:28.434 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:28.434 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:28.434 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:28.434 [7/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.434 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.434 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:28.434 [10/738] Linking static target lib/librte_kvargs.a 00:02:28.434 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.434 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:28.434 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:28.434 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:28.700 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.700 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.700 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:28.700 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.700 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.700 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:28.700 [21/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.700 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.700 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.700 [24/738] Linking target lib/librte_kvargs.so.23.0 00:02:28.700 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.700 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.700 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.959 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.959 [29/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.959 [30/738] Linking static target lib/librte_telemetry.a 00:02:28.959 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.959 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.959 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.959 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.959 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.959 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.959 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.959 [38/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.959 [39/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:28.959 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.959 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.218 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.218 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.218 [44/738] Linking target lib/librte_telemetry.so.23.0 00:02:29.218 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.218 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.218 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.218 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.218 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.218 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:29.218 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.218 [52/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.218 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.477 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.477 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.477 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.477 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.477 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.477 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.477 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.477 [61/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.477 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.477 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.477 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.477 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.477 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:29.477 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.477 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.477 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.477 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.477 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.477 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.736 [73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.736 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.736 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.736 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.736 [77/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.736 [78/738] Generating 
lib/rte_eal_mingw with a custom command 00:02:29.736 [79/738] Generating lib/rte_eal_def with a custom command 00:02:29.736 [80/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.736 [81/738] Generating lib/rte_ring_def with a custom command 00:02:29.736 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:29.736 [83/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.736 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:29.736 [85/738] Generating lib/rte_rcu_def with a custom command 00:02:29.736 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.736 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:29.736 [88/738] Linking static target lib/librte_ring.a 00:02:29.736 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.736 [90/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.995 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:29.995 [92/738] Generating lib/rte_mempool_def with a custom command 00:02:29.995 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.995 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.995 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.995 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:29.995 [97/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.995 [98/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:29.995 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.995 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.253 [101/738] Linking static target lib/librte_eal.a 00:02:30.253 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.253 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.253 [104/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.253 [105/738] Linking static target lib/librte_mempool.a 00:02:30.511 [106/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.511 [107/738] Linking static target lib/librte_rcu.a 00:02:30.511 [108/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.511 [109/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.511 [110/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.511 [111/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.511 [112/738] Generating lib/rte_net_mingw with a custom command 00:02:30.511 [113/738] Generating lib/rte_net_def with a custom command 00:02:30.511 [114/738] Generating lib/rte_meter_def with a custom command 00:02:30.511 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:30.511 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.511 [117/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.511 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.511 [119/738] Linking static target lib/librte_meter.a 00:02:30.511 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.770 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.770 [122/738] Compiling C 
object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:30.770 [123/738] Linking static target lib/librte_net.a 00:02:30.770 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.770 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.770 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.028 [127/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.028 [128/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.028 [129/738] Linking static target lib/librte_mbuf.a 00:02:31.028 [130/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.028 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.028 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.286 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.287 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.287 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.287 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.287 [137/738] Generating lib/rte_ethdev_def with a custom command 00:02:31.287 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.287 [139/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:31.287 [140/738] Generating lib/rte_pci_mingw with a custom command 00:02:31.287 [141/738] Generating lib/rte_pci_def with a custom command 00:02:31.287 [142/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.287 [143/738] Linking static target lib/librte_pci.a 00:02:31.545 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.545 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.545 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.545 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.545 [148/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.545 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.545 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.545 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.545 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.545 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.545 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.803 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.803 [156/738] Generating lib/rte_cmdline_def with a custom command 00:02:31.803 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.803 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:31.803 [159/738] Generating lib/rte_metrics_def with a custom command 00:02:31.803 [160/738] Generating lib/rte_metrics_mingw with a custom command 00:02:31.803 [161/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.803 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.803 [163/738] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.803 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.803 [165/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.803 [166/738] Linking static target lib/librte_cmdline.a 00:02:31.803 [167/738] Generating lib/rte_hash_def with a custom command 00:02:31.803 [168/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:31.803 [169/738] Generating lib/rte_hash_mingw with a custom command 00:02:31.803 [170/738] Generating lib/rte_timer_def with a custom command 00:02:31.803 [171/738] Generating lib/rte_timer_mingw with a custom command 00:02:31.803 [172/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.062 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:32.062 [174/738] Linking static target lib/librte_metrics.a 00:02:32.062 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:32.062 [176/738] Linking static target lib/librte_timer.a 00:02:32.320 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.320 [178/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.320 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:32.320 [180/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.320 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:32.578 [182/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.578 [183/738] Generating lib/rte_acl_def with a custom command 00:02:32.578 [184/738] Generating lib/rte_acl_mingw with a custom command 00:02:32.578 [185/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:32.578 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:32.578 [187/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:32.578 [188/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:32.578 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:32.578 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:32.836 [191/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:32.836 [192/738] Linking static target lib/librte_bitratestats.a 00:02:32.836 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:32.836 [194/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:33.093 [195/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.093 [196/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.093 [197/738] Linking static target lib/librte_ethdev.a 00:02:33.093 [198/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:33.093 [199/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:33.093 [200/738] Linking static target lib/librte_bbdev.a 00:02:33.352 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:33.352 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:33.610 [203/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:33.610 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:33.610 [205/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.610 [206/738] Compiling C 
object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:33.869 [207/738] Generating lib/rte_bpf_def with a custom command 00:02:33.869 [208/738] Generating lib/rte_bpf_mingw with a custom command 00:02:33.869 [209/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.869 [210/738] Linking static target lib/librte_hash.a 00:02:33.869 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:33.869 [212/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:33.869 [213/738] Linking static target lib/librte_cfgfile.a 00:02:33.869 [214/738] Generating lib/rte_cfgfile_def with a custom command 00:02:34.127 [215/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:34.127 [216/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:34.127 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:34.127 [218/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:34.127 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:34.127 [220/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.127 [221/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:34.385 [222/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.385 [223/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:34.385 [224/738] Linking static target lib/librte_bpf.a 00:02:34.385 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:34.385 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.385 [227/738] Generating lib/rte_cryptodev_def with a custom command 00:02:34.385 [228/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:34.385 [229/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.642 [230/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.642 [231/738] Linking static target lib/librte_compressdev.a 00:02:34.642 [232/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.642 [233/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.642 [234/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:34.642 [235/738] Generating lib/rte_distributor_def with a custom command 00:02:34.642 [236/738] Generating lib/rte_distributor_mingw with a custom command 00:02:34.642 [237/738] Linking static target lib/librte_acl.a 00:02:34.642 [238/738] Generating lib/rte_efd_def with a custom command 00:02:34.642 [239/738] Generating lib/rte_efd_mingw with a custom command 00:02:34.642 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:34.899 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:34.899 [242/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.899 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.899 [244/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:34.899 [245/738] Linking target lib/librte_eal.so.23.0 00:02:35.158 [246/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:35.158 [247/738] Linking target lib/librte_ring.so.23.0 
00:02:35.158 [248/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:35.158 [249/738] Linking target lib/librte_meter.so.23.0 00:02:35.158 [250/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.158 [251/738] Linking target lib/librte_pci.so.23.0 00:02:35.158 [252/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:35.158 [253/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:35.158 [254/738] Linking target lib/librte_timer.so.23.0 00:02:35.158 [255/738] Linking target lib/librte_rcu.so.23.0 00:02:35.158 [256/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:35.158 [257/738] Linking target lib/librte_mempool.so.23.0 00:02:35.158 [258/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:35.158 [259/738] Linking target lib/librte_acl.so.23.0 00:02:35.417 [260/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:35.417 [261/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:35.417 [262/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:35.417 [263/738] Linking static target lib/librte_distributor.a 00:02:35.417 [264/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:35.417 [265/738] Linking target lib/librte_cfgfile.so.23.0 00:02:35.417 [266/738] Linking target lib/librte_mbuf.so.23.0 00:02:35.417 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:35.417 [268/738] Linking target lib/librte_net.so.23.0 00:02:35.417 [269/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:35.417 [270/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.417 [271/738] Linking target lib/librte_bbdev.so.23.0 00:02:35.417 [272/738] Linking target lib/librte_compressdev.so.23.0 00:02:35.676 [273/738] Linking static target lib/librte_efd.a 00:02:35.676 [274/738] Linking target lib/librte_distributor.so.23.0 00:02:35.676 [275/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:35.676 [276/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:35.676 [277/738] Generating lib/rte_eventdev_def with a custom command 00:02:35.676 [278/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:35.676 [279/738] Linking target lib/librte_cmdline.so.23.0 00:02:35.676 [280/738] Linking target lib/librte_hash.so.23.0 00:02:35.676 [281/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:35.676 [282/738] Generating lib/rte_gpudev_def with a custom command 00:02:35.676 [283/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:35.676 [284/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:35.676 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.676 [286/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.676 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:35.676 [288/738] Linking static target lib/librte_cryptodev.a 00:02:35.676 [289/738] Linking target lib/librte_efd.so.23.0 00:02:36.243 [290/738] Compiling C object 
lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:36.243 [291/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:36.243 [292/738] Linking static target lib/librte_gpudev.a 00:02:36.243 [293/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:36.243 [294/738] Generating lib/rte_gro_def with a custom command 00:02:36.243 [295/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:36.243 [296/738] Generating lib/rte_gro_mingw with a custom command 00:02:36.243 [297/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:36.243 [298/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:36.243 [299/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.243 [300/738] Linking target lib/librte_ethdev.so.23.0 00:02:36.502 [301/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:36.502 [302/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:36.502 [303/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:36.502 [304/738] Linking target lib/librte_metrics.so.23.0 00:02:36.502 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:36.502 [306/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:36.502 [307/738] Linking static target lib/librte_gro.a 00:02:36.502 [308/738] Linking target lib/librte_bpf.so.23.0 00:02:36.502 [309/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:36.502 [310/738] Linking static target lib/librte_eventdev.a 00:02:36.502 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:36.502 [312/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:36.502 [313/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:36.502 [314/738] Generating lib/rte_gso_mingw with a custom command 00:02:36.502 [315/738] Generating lib/rte_gso_def with a custom command 00:02:36.502 [316/738] Linking target lib/librte_bitratestats.so.23.0 00:02:36.502 [317/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.502 [318/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:36.761 [319/738] Linking target lib/librte_gpudev.so.23.0 00:02:36.761 [320/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.761 [321/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:36.761 [322/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:36.762 [323/738] Linking target lib/librte_gro.so.23.0 00:02:36.762 [324/738] Linking static target lib/librte_gso.a 00:02:36.762 [325/738] Generating lib/rte_ip_frag_def with a custom command 00:02:36.762 [326/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:36.762 [327/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.762 [328/738] Linking target lib/librte_gso.so.23.0 00:02:36.762 [329/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:36.762 [330/738] Linking static target lib/librte_jobstats.a 00:02:36.762 [331/738] Generating lib/rte_jobstats_def with a custom command 00:02:36.762 [332/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:36.762 [333/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:37.020 
[334/738] Generating lib/rte_latencystats_def with a custom command 00:02:37.020 [335/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:37.020 [336/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:37.020 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:37.020 [338/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:37.020 [339/738] Generating lib/rte_lpm_def with a custom command 00:02:37.020 [340/738] Generating lib/rte_lpm_mingw with a custom command 00:02:37.020 [341/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.020 [342/738] Linking target lib/librte_jobstats.so.23.0 00:02:37.278 [343/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.278 [344/738] Linking target lib/librte_cryptodev.so.23.0 00:02:37.278 [345/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:37.278 [346/738] Linking static target lib/librte_latencystats.a 00:02:37.278 [347/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:37.278 [348/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:37.278 [349/738] Linking static target lib/librte_ip_frag.a 00:02:37.278 [350/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:37.278 [351/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:37.278 [352/738] Generating lib/rte_member_def with a custom command 00:02:37.278 [353/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:37.278 [354/738] Generating lib/rte_member_mingw with a custom command 00:02:37.278 [355/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.278 [356/738] Generating lib/rte_pcapng_def with a custom command 00:02:37.535 [357/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:37.535 [358/738] Linking target lib/librte_latencystats.so.23.0 00:02:37.535 [359/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.535 [360/738] Linking target lib/librte_ip_frag.so.23.0 00:02:37.535 [361/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:37.535 [362/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.535 [363/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:37.535 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.535 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:37.794 [366/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.794 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:37.794 [368/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:37.794 [369/738] Linking static target lib/librte_lpm.a 00:02:37.794 [370/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.794 [371/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:37.794 [372/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:37.794 [373/738] Linking target lib/librte_eventdev.so.23.0 00:02:37.794 [374/738] Generating lib/rte_power_def with a custom command 
00:02:37.794 [375/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.794 [376/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:37.794 [377/738] Linking static target lib/librte_pcapng.a 00:02:37.794 [378/738] Generating lib/rte_power_mingw with a custom command 00:02:37.794 [379/738] Generating lib/rte_rawdev_def with a custom command 00:02:38.052 [380/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:38.052 [381/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:38.052 [382/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.052 [383/738] Generating lib/rte_regexdev_def with a custom command 00:02:38.052 [384/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:38.052 [385/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.052 [386/738] Generating lib/rte_dmadev_def with a custom command 00:02:38.052 [387/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:38.052 [388/738] Linking target lib/librte_lpm.so.23.0 00:02:38.052 [389/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:38.052 [390/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.052 [391/738] Generating lib/rte_rib_def with a custom command 00:02:38.052 [392/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:38.052 [393/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.052 [394/738] Generating lib/rte_rib_mingw with a custom command 00:02:38.052 [395/738] Linking target lib/librte_pcapng.so.23.0 00:02:38.052 [396/738] Generating lib/rte_reorder_def with a custom command 00:02:38.052 [397/738] Generating lib/rte_reorder_mingw with a custom command 00:02:38.311 [398/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:38.311 [399/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:38.311 [400/738] Linking static target lib/librte_rawdev.a 00:02:38.311 [401/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.311 [402/738] Linking static target lib/librte_power.a 00:02:38.311 [403/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.311 [404/738] Linking static target lib/librte_dmadev.a 00:02:38.311 [405/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:38.311 [406/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:38.311 [407/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:38.311 [408/738] Linking static target lib/librte_regexdev.a 00:02:38.311 [409/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:38.569 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:38.569 [411/738] Generating lib/rte_sched_def with a custom command 00:02:38.569 [412/738] Generating lib/rte_sched_mingw with a custom command 00:02:38.569 [413/738] Generating lib/rte_security_def with a custom command 00:02:38.569 [414/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.569 [415/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.569 [416/738] Linking static target lib/librte_reorder.a 00:02:38.569 [417/738] Generating lib/rte_security_mingw with a custom command 00:02:38.569 [418/738] 
Linking target lib/librte_rawdev.so.23.0 00:02:38.569 [419/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.569 [420/738] Linking target lib/librte_dmadev.so.23.0 00:02:38.569 [421/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:38.569 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:38.569 [423/738] Linking static target lib/librte_rib.a 00:02:38.569 [424/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:38.569 [425/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:38.569 [426/738] Generating lib/rte_stack_def with a custom command 00:02:38.569 [427/738] Linking static target lib/librte_stack.a 00:02:38.569 [428/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:38.569 [429/738] Generating lib/rte_stack_mingw with a custom command 00:02:38.569 [430/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.828 [431/738] Linking target lib/librte_reorder.so.23.0 00:02:38.828 [432/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.828 [433/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.828 [434/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:38.828 [435/738] Linking static target lib/librte_member.a 00:02:38.828 [436/738] Linking target lib/librte_stack.so.23.0 00:02:38.828 [437/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.828 [438/738] Linking target lib/librte_power.so.23.0 00:02:38.828 [439/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.828 [440/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.828 [441/738] Linking target lib/librte_regexdev.so.23.0 00:02:38.828 [442/738] Linking target lib/librte_rib.so.23.0 00:02:39.087 [443/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:39.087 [444/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.087 [445/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.087 [446/738] Linking static target lib/librte_security.a 00:02:39.087 [447/738] Generating lib/rte_vhost_def with a custom command 00:02:39.087 [448/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.087 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:02:39.087 [450/738] Linking target lib/librte_member.so.23.0 00:02:39.087 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.345 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.345 [453/738] Linking target lib/librte_security.so.23.0 00:02:39.345 [454/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:39.345 [455/738] Linking static target lib/librte_sched.a 00:02:39.345 [456/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.345 [457/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:39.604 [458/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.604 [459/738] Linking target lib/librte_sched.so.23.0 00:02:39.604 [460/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 
00:02:39.604 [461/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:39.604 [462/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:39.604 [463/738] Generating lib/rte_ipsec_def with a custom command 00:02:39.604 [464/738] Generating lib/rte_ipsec_mingw with a custom command 00:02:39.932 [465/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:39.932 [466/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.932 [467/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:39.932 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:39.932 [469/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:39.932 [470/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:40.223 [471/738] Generating lib/rte_fib_def with a custom command 00:02:40.223 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:40.223 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:40.223 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:40.223 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:40.223 [476/738] Linking static target lib/librte_ipsec.a 00:02:40.223 [477/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:40.481 [478/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.481 [479/738] Linking target lib/librte_ipsec.so.23.0 00:02:40.481 [480/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:40.481 [481/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:40.481 [482/738] Linking static target lib/librte_fib.a 00:02:40.740 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:40.740 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:40.740 [485/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.740 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:40.740 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:40.740 [488/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:40.740 [489/738] Linking target lib/librte_fib.so.23.0 00:02:40.999 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:40.999 [491/738] Generating lib/rte_port_def with a custom command 00:02:40.999 [492/738] Generating lib/rte_port_mingw with a custom command 00:02:41.257 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:41.257 [494/738] Generating lib/rte_pdump_def with a custom command 00:02:41.257 [495/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:41.257 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:02:41.257 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:41.257 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:41.257 [499/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:41.257 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:41.516 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:41.516 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:41.516 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:41.516 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:41.516 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:41.516 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:41.516 [507/738] Linking static target lib/librte_port.a 00:02:41.775 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:41.775 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:41.775 [510/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:41.775 [511/738] Linking static target lib/librte_pdump.a 00:02:41.775 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:42.034 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.034 [514/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.034 [515/738] Linking target lib/librte_pdump.so.23.0 00:02:42.034 [516/738] Linking target lib/librte_port.so.23.0 00:02:42.034 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:42.034 [518/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:42.034 [519/738] Generating lib/rte_table_def with a custom command 00:02:42.034 [520/738] Generating lib/rte_table_mingw with a custom command 00:02:42.292 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:42.292 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:42.292 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:42.292 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:42.292 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:42.292 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:42.550 [527/738] Generating lib/rte_pipeline_def with a custom command 00:02:42.550 [528/738] Generating lib/rte_pipeline_mingw with a custom command 00:02:42.550 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:42.550 [530/738] Linking static target lib/librte_table.a 00:02:42.550 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:42.550 [532/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:42.808 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:42.808 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:42.808 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:42.808 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:42.808 [537/738] Generating lib/rte_graph_def with a custom command 00:02:42.808 [538/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.808 [539/738] Generating lib/rte_graph_mingw with a custom command 00:02:42.808 [540/738] Linking target lib/librte_table.so.23.0 00:02:43.065 [541/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:43.065 [542/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:43.065 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:43.065 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:43.065 [545/738] Linking static target 
lib/librte_graph.a 00:02:43.065 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:43.323 [547/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:43.323 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:43.323 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:43.323 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:43.581 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:43.581 [552/738] Generating lib/rte_node_def with a custom command 00:02:43.581 [553/738] Generating lib/rte_node_mingw with a custom command 00:02:43.581 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:43.581 [555/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:43.581 [556/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.581 [557/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.581 [558/738] Linking target lib/librte_graph.so.23.0 00:02:43.839 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.839 [560/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:43.839 [561/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:43.839 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:02:43.839 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:43.839 [564/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.839 [565/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:43.839 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.839 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:02:43.839 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:43.839 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:02:43.839 [570/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:43.839 [571/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:43.839 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:43.839 [573/738] Linking static target lib/librte_node.a 00:02:44.098 [574/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.098 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.098 [576/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.098 [577/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.098 [578/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.098 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.098 [580/738] Linking target lib/librte_node.so.23.0 00:02:44.098 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.098 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.098 [583/738] Linking static target drivers/librte_bus_vdev.a 00:02:44.098 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.098 [585/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.098 [586/738] Linking static target drivers/librte_bus_pci.a 00:02:44.357 
[587/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.357 [588/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:44.357 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:44.357 [590/738] Linking target drivers/librte_bus_vdev.so.23.0
00:02:44.357 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:44.357 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:44.357 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:44.615 [594/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.615 [595/738] Linking target drivers/librte_bus_pci.so.23.0
00:02:44.615 [596/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:44.615 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:44.615 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:44.615 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:44.615 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:44.615 [601/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:44.615 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:44.615 [603/738] Linking static target drivers/librte_mempool_ring.a
00:02:44.615 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:44.873 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:02:44.873 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:45.132 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:45.391 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:45.391 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:45.650 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:45.650 [611/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:45.908 [612/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:45.908 [613/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:45.908 [614/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:45.908 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:46.167 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:46.167 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:02:46.167 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:46.167 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:46.735 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:46.735 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:46.993 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:46.993 [623/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:46.993 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:46.993 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:47.251 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:47.251 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:47.251 [628/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:47.251 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:47.251 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:47.251 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:47.509 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:47.509 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:47.509 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:47.767 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:47.767 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:47.767 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:47.767 [638/738] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:47.767 [639/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:47.767 [640/738] Linking static target drivers/librte_net_i40e.a
00:02:48.025 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:48.025 [642/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:48.025 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:48.025 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:48.283 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:48.283 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:48.283 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:48.283 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:48.283 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.540 [650/738] Linking target drivers/librte_net_i40e.so.23.0
00:02:48.540 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:48.540 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:48.540 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:48.540 [654/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:48.797 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:48.797 [656/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:48.797 [657/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:48.797 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:48.797 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:48.797 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:49.055 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:49.055 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:49.055 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:49.313 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:49.571 [665/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:49.571 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:49.571 [667/738] Linking static target lib/librte_vhost.a
00:02:49.571 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:49.571 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:49.829 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:49.829 [671/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:49.829 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:49.829 [673/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:49.829 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:50.087 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:50.087 [676/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:50.087 [677/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:50.345 [678/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:50.345 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:50.345 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:50.345 [681/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.345 [682/738] Linking target lib/librte_vhost.so.23.0
00:02:50.603 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:50.603 [684/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:50.603 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:50.603 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:50.603 [687/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:50.603 [688/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:50.860 [689/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:50.860 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:50.860 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:51.118 [692/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:51.118 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:51.118 [694/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:51.377 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:51.377 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:51.377 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:51.636 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:51.636 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:51.895 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:51.895 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:51.895 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:51.895 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:51.895 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:52.155 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:52.155 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:52.413 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:52.671 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:52.672 [709/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:52.672 [710/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:52.672 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:52.672 [712/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:52.672 [713/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:52.672 [714/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:52.931 [715/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:52.931 [716/738] Linking static target lib/librte_pipeline.a
00:02:52.931 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:53.190 [718/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:53.190 [719/738] Linking target app/dpdk-test-acl
00:02:53.190 [720/738] Linking target app/dpdk-dumpcap
00:02:53.190 [721/738] Linking target app/dpdk-test-cmdline
00:02:53.190 [722/738] Linking target app/dpdk-proc-info
00:02:53.190 [723/738] Linking target app/dpdk-test-bbdev
00:02:53.190 [724/738] Linking target app/dpdk-pdump
00:02:53.190 [725/738] Linking target app/dpdk-test-compress-perf
00:02:53.449 [726/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:53.449 [727/738] Linking target app/dpdk-test-crypto-perf
00:02:53.449 [728/738] Linking target app/dpdk-test-flow-perf
00:02:53.449 [729/738] Linking target app/dpdk-test-gpudev
00:02:53.449 [730/738] Linking target app/dpdk-test-eventdev
00:02:53.449 [731/738] Linking target app/dpdk-test-pipeline
00:02:53.449 [732/738] Linking target app/dpdk-test-fib
00:02:53.449 [733/738] Linking target app/dpdk-test-regex
00:02:53.449 [734/738] Linking target app/dpdk-testpmd
00:02:53.449 [735/738] Linking target app/dpdk-test-sad
00:02:53.708 [736/738] Linking target app/dpdk-test-security-perf
00:02:56.294 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.294 [738/738] Linking target lib/librte_pipeline.so.23.0
00:02:56.294 06:28:06 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:56.294 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:56.294 [0/1] Installing files.
00:02:56.294 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:56.295 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.296 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.297 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.297 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.297 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:56.297 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.297 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.297 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.297 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
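At this point the run has switched from staging example sources to installing the DPDK libraries themselves: each one lands in build/lib twice, as a static archive (librte_*.a) and as a shared object carrying the ABI-versioned suffix .so.23.0, and the run continues below. A minimal sketch, not part of the test run, of how one might sanity-check that an installed shared object resolves: librte_kvargs is picked here because it has no dependencies on other DPDK libraries, and the path is the install destination shown in the log; the file name and the check itself are illustrative assumptions.

/* check_kvargs.c -- hypothetical standalone check, assuming the install
 * prefix shown in this log. Compile with: cc check_kvargs.c (add -ldl on
 * older glibc). */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* RTLD_NOW forces all symbols to resolve immediately, so a missing
     * dependency shows up here rather than at first use. */
    void *h = dlopen(
        "/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23.0",
        RTLD_NOW);
    if (h == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    puts("librte_kvargs.so.23.0 loads cleanly");
    dlclose(h);
    return 0;
}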
00:02:56.557 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.557 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.558 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.558 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.558 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:56.558 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:56.558 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.558 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.559 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.560 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.561 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:56.561 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:56.561 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:56.561 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:56.561 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:56.561 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:56.561 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:56.561 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:56.561 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:56.561 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:56.561 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:56.561 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:56.561 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:56.561 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:56.561 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:56.561 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:56.561 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:56.561 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:56.561 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:56.561 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:56.561 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
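[Editorial note] The libdpdk.pc files and the librte_*.so -> .so.23 -> .so.23.0 symlink chains being installed in this stretch of the log are what later consumers (SPDK's configure step further down) use to locate this private DPDK build. A minimal sketch of pointing a shell at it and verifying the chain, assuming only the install prefix printed in these lines:

    # Make pkg-config resolve the libdpdk.pc installed above.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig:$PKG_CONFIG_PATH
    pkg-config --cflags --libs libdpdk    # prints the -I/-L/-l flags for this build
    # Inspect one SONAME chain created by the "Installing symlink" steps:
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
    readelf -d /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 | grep SONAME
    # At run time the directory must also be on the loader path:
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
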
00:02:56.561 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:56.561 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:56.561 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:56.561 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:56.561 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:56.561 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:56.561 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:56.561 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:56.561 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:56.561 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:56.561 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:56.561 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:56.561 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:56.561 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:56.561 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:56.561 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:56.561 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:56.561 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:56.561 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:56.561 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:56.562 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:56.562 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:56.562 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:56.562 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:56.562 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:56.562 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:56.562 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:56.562 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:56.562 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:56.562 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:56.562 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:56.562 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:56.562 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:56.562 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:56.562 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:56.562 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:56.562 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:56.562 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:56.562 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:56.562 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:56.562 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:56.562 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:56.562 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:56.562 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:56.562 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:56.562 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:56.562 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:56.562 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:56.562 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:56.562 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:56.562 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:56.562 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:56.562 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:56.562 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:56.562 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:56.562 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:56.562 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:56.562 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:56.562 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:56.562 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:56.562 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:56.562 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:56.562 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:56.562 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:56.562 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:56.562 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:56.562 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:56.562 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:56.562 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:56.562 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:56.562 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:56.562 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:56.562 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:56.562 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:56.562 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:56.562 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:56.562 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:56.562 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:56.562 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:56.562 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:56.562 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:56.562 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:56.562 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:56.562 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:56.562 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:56.562 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:56.562 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:56.562 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:56.562 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:56.562 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:02:56.562 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:02:56.562 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:02:56.562 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:02:56.562 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:02:56.562 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:02:56.562 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:02:56.562 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:56.562 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:56.562 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:56.562 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:56.562 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:56.562 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:56.562 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:56.562 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:56.562 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:56.821 06:28:07 -- common/autobuild_common.sh@192 -- $ uname -s
00:02:56.821 06:28:07 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:56.821 06:28:07 -- common/autobuild_common.sh@203 -- $ cat
00:02:56.821 06:28:07 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:56.821
00:02:56.821 real 0m34.234s
00:02:56.821 user 3m37.463s
00:02:56.821 sys 0m34.435s
00:02:56.821 06:28:07 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:56.821 06:28:07 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.821 ************************************
00:02:56.821 END TEST build_native_dpdk
00:02:56.821 ************************************
00:02:56.821 06:28:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:56.821 06:28:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:56.821 06:28:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:02:56.821 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:02:57.079 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:02:57.079 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:02:57.079 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:57.079 Using 'verbs' RDMA provider
00:03:07.978 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:03:17.975 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:17.975 Creating mk/config.mk...done.
00:03:17.975 Creating mk/cc.flags.mk...done.
00:03:17.975 Type 'make' to build.
00:03:17.975 06:28:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:17.975 06:28:28 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:03:17.975 06:28:28 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:03:17.975 06:28:28 -- common/autotest_common.sh@10 -- $ set +x
00:03:17.975 ************************************
00:03:17.975 START TEST make
00:03:17.975 ************************************
00:03:17.975 06:28:28 -- common/autotest_common.sh@1114 -- $ make -j10
00:03:18.233 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:18.233 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:18.233 meson setup builddir \
00:03:18.233 -Dwith-libaio=enabled \
00:03:18.233 -Dwith-liburing=enabled \
00:03:18.233 -Dwith-libvfn=disabled \
00:03:18.233 -Dwith-spdk=false && \
00:03:18.233 meson compile -C builddir && \
00:03:18.233 cd -)
00:03:18.233 make[1]: Nothing to be done for 'all'.
00:03:20.765 The Meson build system
00:03:20.765 Version: 1.5.0
00:03:20.765 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:20.765 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:20.765 Build type: native build
00:03:20.765 Project name: xnvme
00:03:20.765 Project version: 0.7.3
00:03:20.765 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:20.765 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:20.765 Host machine cpu family: x86_64
00:03:20.765 Host machine cpu: x86_64
00:03:20.765 Message: host_machine.system: linux
00:03:20.765 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:20.765 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:20.765 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:20.765 Run-time dependency threads found: YES
00:03:20.765 Has header "setupapi.h" : NO
00:03:20.765 Has header "linux/blkzoned.h" : YES
00:03:20.765 Has header "linux/blkzoned.h" : YES (cached)
00:03:20.765 Has header "libaio.h" : YES
00:03:20.765 Library aio found: YES
00:03:20.765 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:20.765 Run-time dependency liburing found: YES 2.2
00:03:20.765 Dependency libvfn skipped: feature with-libvfn disabled
00:03:20.765 Run-time dependency appleframeworks found: NO (tried framework)
00:03:20.765 Run-time dependency appleframeworks found: NO (tried framework)
00:03:20.765 Configuring xnvme_config.h using configuration
00:03:20.765 Configuring xnvme.spec using configuration
00:03:20.765 Run-time dependency bash-completion found: YES 2.11
00:03:20.765 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:20.765 Program cp found: YES (/usr/bin/cp)
00:03:20.765 Has header "winsock2.h" : NO
00:03:20.765 Has header "dbghelp.h" : NO
00:03:20.765 Library rpcrt4 found: NO
00:03:20.765 Library rt found: YES
00:03:20.765 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:20.765 Found CMake: /usr/bin/cmake (3.27.7)
00:03:20.765 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:03:20.765 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:03:20.765 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:03:20.765 Build targets in project: 32
00:03:20.765
00:03:20.765 xnvme 0.7.3
00:03:20.765
00:03:20.765 User defined options
00:03:20.765 with-libaio : enabled
00:03:20.765 with-liburing: enabled
00:03:20.765 with-libvfn : disabled
00:03:20.765 with-spdk : false
00:03:20.765
00:03:20.766 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:20.766 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:20.766 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:20.766 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:20.766 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:03:20.766 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:20.766 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:20.766 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:20.766 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:20.766 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:20.766 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:03:20.766 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:03:20.766 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:03:20.766 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:03:20.766 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:03:20.766 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:03:20.766 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:03:20.766 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:03:20.766 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:03:21.025 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:03:21.025 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:03:21.025 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:03:21.025 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:03:21.025 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:03:21.025 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:03:21.025 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:03:21.025 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:03:21.025 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:03:21.025 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:03:21.025 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:03:21.025 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:03:21.025 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:03:21.025 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:03:21.025 [32/203] Compiling
C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:21.025 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:21.025 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:21.025 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:21.025 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:21.025 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:21.025 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:21.025 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:21.025 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:21.025 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:21.025 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:21.025 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:21.025 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:21.025 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:21.025 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:21.025 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:21.025 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:21.025 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:21.025 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:21.025 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:21.025 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:21.025 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:21.025 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:21.283 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:21.283 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:21.283 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:21.283 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:21.283 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:21.283 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:21.283 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:21.283 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:21.283 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:21.283 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:21.283 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:21.283 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:21.283 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:21.283 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:21.283 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:21.283 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:21.283 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:21.283 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:21.284 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:21.284 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:21.284 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:21.543 [76/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:21.543 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:21.543 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:21.543 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:21.543 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:21.543 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:21.543 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:21.543 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:21.543 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:21.543 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:21.543 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:21.543 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:21.543 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:21.543 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:21.543 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:21.543 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:21.543 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:21.543 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:21.543 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:21.543 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:21.543 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:21.543 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:21.543 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:21.543 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:21.543 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:21.543 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:21.543 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:21.802 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:21.802 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:21.802 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:21.802 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:21.802 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:21.802 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:21.802 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:21.802 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:21.802 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:21.802 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:21.802 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:21.802 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:21.802 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:21.802 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:21.802 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:21.802 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:21.802 [119/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:21.802 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:21.802 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:21.802 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:21.802 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:21.802 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:21.802 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:21.802 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:21.802 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:21.802 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:21.802 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:21.802 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:21.802 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:21.802 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:21.802 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:21.802 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:22.061 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:22.061 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:22.061 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:22.061 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:22.061 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:22.061 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:22.061 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:22.061 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:22.061 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:22.061 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:22.061 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:22.061 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:22.061 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:22.061 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:22.061 [149/203] Linking target lib/libxnvme.so 00:03:22.061 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:22.061 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:22.061 [152/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:22.061 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:22.061 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:22.319 [155/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:22.319 [156/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:22.319 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:22.319 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:22.319 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:22.319 [160/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:22.319 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:22.319 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:22.319 [163/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:22.319 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:22.319 
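[Editorial note] Each xnvme source appears twice in the ninja transcript, once under lib/libxnvme.so.p/ and once under lib/libxnvme.a.p/: Meson compiles separate object sets for the shared and the static library variants (PIC vs. non-PIC). A sketch for confirming both artifacts once the 203 steps finish, assuming the builddir layout printed above (the [149/203] step links the shared library, [174/203] the static archive):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # One ELF shared object and one ar archive should exist side by side:
    file builddir/lib/libxnvme.so builddir/lib/libxnvme.a
    # ninja can enumerate every target behind the [n/203] progress ticks:
    ninja -C builddir -t targets all | wc -l
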
[165/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:22.319 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:22.319 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:22.319 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:22.319 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:22.319 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:22.319 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:22.319 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:22.578 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:22.578 [174/203] Linking static target lib/libxnvme.a 00:03:22.578 [175/203] Linking target tests/xnvme_tests_cli 00:03:22.578 [176/203] Linking target tests/xnvme_tests_buf 00:03:22.578 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:22.578 [178/203] Linking target tests/xnvme_tests_lblk 00:03:22.578 [179/203] Linking target tests/xnvme_tests_scc 00:03:22.578 [180/203] Linking target tests/xnvme_tests_enum 00:03:22.578 [181/203] Linking target tests/xnvme_tests_xnvme_file 00:03:22.578 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:22.578 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:22.578 [184/203] Linking target tests/xnvme_tests_znd_state 00:03:22.578 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:22.578 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:22.578 [187/203] Linking target tests/xnvme_tests_znd_append 00:03:22.578 [188/203] Linking target tests/xnvme_tests_kvs 00:03:22.578 [189/203] Linking target tools/lblk 00:03:22.578 [190/203] Linking target tests/xnvme_tests_map 00:03:22.578 [191/203] Linking target tools/xdd 00:03:22.578 [192/203] Linking target tools/xnvme 00:03:22.578 [193/203] Linking target tools/xnvme_file 00:03:22.578 [194/203] Linking target examples/xnvme_hello 00:03:22.578 [195/203] Linking target examples/xnvme_dev 00:03:22.578 [196/203] Linking target tools/kvs 00:03:22.578 [197/203] Linking target examples/zoned_io_async 00:03:22.578 [198/203] Linking target tools/zoned 00:03:22.578 [199/203] Linking target examples/xnvme_io_async 00:03:22.578 [200/203] Linking target examples/xnvme_enum 00:03:22.578 [201/203] Linking target examples/xnvme_single_async 00:03:22.578 [202/203] Linking target examples/zoned_io_sync 00:03:22.578 [203/203] Linking target examples/xnvme_single_sync 00:03:22.578 INFO: autodetecting backend as ninja 00:03:22.578 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:22.837 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:35.040 CC lib/ut_mock/mock.o 00:03:35.040 CC lib/ut/ut.o 00:03:35.040 CC lib/log/log.o 00:03:35.040 CC lib/log/log_flags.o 00:03:35.040 CC lib/log/log_deprecated.o 00:03:35.040 LIB libspdk_ut_mock.a 00:03:35.040 SO libspdk_ut_mock.so.5.0 00:03:35.040 LIB libspdk_ut.a 00:03:35.040 SO libspdk_ut.so.1.0 00:03:35.040 LIB libspdk_log.a 00:03:35.040 SYMLINK libspdk_ut_mock.so 00:03:35.040 SYMLINK libspdk_ut.so 00:03:35.040 SO libspdk_log.so.6.1 00:03:35.040 SYMLINK libspdk_log.so 00:03:35.040 CC lib/util/base64.o 00:03:35.040 CC lib/ioat/ioat.o 00:03:35.040 CC lib/dma/dma.o 00:03:35.040 CC lib/util/bit_array.o 00:03:35.040 CC lib/util/cpuset.o 00:03:35.040 CXX lib/trace_parser/trace.o 00:03:35.040 CC lib/util/crc32.o 00:03:35.040 CC lib/util/crc16.o 00:03:35.040 CC lib/util/crc32c.o 00:03:35.040 CC 
lib/vfio_user/host/vfio_user_pci.o 00:03:35.040 CC lib/util/crc32_ieee.o 00:03:35.040 CC lib/util/crc64.o 00:03:35.040 CC lib/util/dif.o 00:03:35.040 LIB libspdk_dma.a 00:03:35.040 SO libspdk_dma.so.3.0 00:03:35.040 CC lib/util/fd.o 00:03:35.040 CC lib/util/file.o 00:03:35.040 SYMLINK libspdk_dma.so 00:03:35.040 CC lib/util/hexlify.o 00:03:35.040 CC lib/util/iov.o 00:03:35.040 CC lib/util/math.o 00:03:35.040 CC lib/vfio_user/host/vfio_user.o 00:03:35.040 CC lib/util/pipe.o 00:03:35.040 LIB libspdk_ioat.a 00:03:35.040 CC lib/util/strerror_tls.o 00:03:35.040 SO libspdk_ioat.so.6.0 00:03:35.040 CC lib/util/string.o 00:03:35.040 SYMLINK libspdk_ioat.so 00:03:35.040 CC lib/util/uuid.o 00:03:35.040 CC lib/util/fd_group.o 00:03:35.040 CC lib/util/xor.o 00:03:35.040 CC lib/util/zipf.o 00:03:35.040 LIB libspdk_vfio_user.a 00:03:35.040 SO libspdk_vfio_user.so.4.0 00:03:35.040 SYMLINK libspdk_vfio_user.so 00:03:35.299 LIB libspdk_trace_parser.a 00:03:35.299 LIB libspdk_util.a 00:03:35.299 SO libspdk_trace_parser.so.4.0 00:03:35.299 SO libspdk_util.so.8.0 00:03:35.299 SYMLINK libspdk_trace_parser.so 00:03:35.299 SYMLINK libspdk_util.so 00:03:35.557 CC lib/env_dpdk/env.o 00:03:35.557 CC lib/env_dpdk/memory.o 00:03:35.557 CC lib/json/json_parse.o 00:03:35.557 CC lib/env_dpdk/init.o 00:03:35.557 CC lib/json/json_util.o 00:03:35.557 CC lib/conf/conf.o 00:03:35.557 CC lib/env_dpdk/pci.o 00:03:35.557 CC lib/idxd/idxd.o 00:03:35.557 CC lib/rdma/common.o 00:03:35.557 CC lib/vmd/vmd.o 00:03:35.557 LIB libspdk_conf.a 00:03:35.557 SO libspdk_conf.so.5.0 00:03:35.816 SYMLINK libspdk_conf.so 00:03:35.816 CC lib/vmd/led.o 00:03:35.816 CC lib/rdma/rdma_verbs.o 00:03:35.816 CC lib/json/json_write.o 00:03:35.816 CC lib/env_dpdk/threads.o 00:03:35.816 CC lib/env_dpdk/pci_ioat.o 00:03:35.816 CC lib/env_dpdk/pci_virtio.o 00:03:35.816 CC lib/env_dpdk/pci_vmd.o 00:03:35.816 CC lib/env_dpdk/pci_idxd.o 00:03:35.816 CC lib/env_dpdk/pci_event.o 00:03:35.816 CC lib/env_dpdk/sigbus_handler.o 00:03:35.816 CC lib/env_dpdk/pci_dpdk.o 00:03:35.816 LIB libspdk_rdma.a 00:03:35.816 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.816 SO libspdk_rdma.so.5.0 00:03:36.075 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:36.075 SYMLINK libspdk_rdma.so 00:03:36.075 CC lib/idxd/idxd_user.o 00:03:36.075 CC lib/idxd/idxd_kernel.o 00:03:36.075 LIB libspdk_json.a 00:03:36.075 SO libspdk_json.so.5.1 00:03:36.075 SYMLINK libspdk_json.so 00:03:36.075 LIB libspdk_vmd.a 00:03:36.075 SO libspdk_vmd.so.5.0 00:03:36.075 LIB libspdk_idxd.a 00:03:36.333 CC lib/jsonrpc/jsonrpc_server.o 00:03:36.333 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:36.333 CC lib/jsonrpc/jsonrpc_client.o 00:03:36.333 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:36.333 SO libspdk_idxd.so.11.0 00:03:36.333 SYMLINK libspdk_vmd.so 00:03:36.333 SYMLINK libspdk_idxd.so 00:03:36.333 LIB libspdk_jsonrpc.a 00:03:36.593 SO libspdk_jsonrpc.so.5.1 00:03:36.593 SYMLINK libspdk_jsonrpc.so 00:03:36.593 LIB libspdk_env_dpdk.a 00:03:36.593 SO libspdk_env_dpdk.so.13.0 00:03:36.593 CC lib/rpc/rpc.o 00:03:36.854 SYMLINK libspdk_env_dpdk.so 00:03:36.854 LIB libspdk_rpc.a 00:03:36.854 SO libspdk_rpc.so.5.0 00:03:36.854 SYMLINK libspdk_rpc.so 00:03:37.114 CC lib/trace/trace_rpc.o 00:03:37.114 CC lib/trace/trace.o 00:03:37.114 CC lib/trace/trace_flags.o 00:03:37.114 CC lib/notify/notify.o 00:03:37.114 CC lib/notify/notify_rpc.o 00:03:37.114 CC lib/sock/sock.o 00:03:37.114 CC lib/sock/sock_rpc.o 00:03:37.114 LIB libspdk_notify.a 00:03:37.373 SO libspdk_notify.so.5.0 00:03:37.373 LIB libspdk_trace.a 00:03:37.373 SO 
libspdk_trace.so.9.0 00:03:37.373 SYMLINK libspdk_notify.so 00:03:37.373 SYMLINK libspdk_trace.so 00:03:37.373 LIB libspdk_sock.a 00:03:37.373 CC lib/thread/thread.o 00:03:37.373 SO libspdk_sock.so.8.0 00:03:37.373 CC lib/thread/iobuf.o 00:03:37.631 SYMLINK libspdk_sock.so 00:03:37.631 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.631 CC lib/nvme/nvme_ctrlr.o 00:03:37.631 CC lib/nvme/nvme_ns_cmd.o 00:03:37.631 CC lib/nvme/nvme_fabric.o 00:03:37.631 CC lib/nvme/nvme_pcie.o 00:03:37.631 CC lib/nvme/nvme_pcie_common.o 00:03:37.631 CC lib/nvme/nvme_qpair.o 00:03:37.631 CC lib/nvme/nvme_ns.o 00:03:37.890 CC lib/nvme/nvme.o 00:03:38.148 CC lib/nvme/nvme_quirks.o 00:03:38.406 CC lib/nvme/nvme_transport.o 00:03:38.406 CC lib/nvme/nvme_discovery.o 00:03:38.406 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:38.406 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:38.406 CC lib/nvme/nvme_tcp.o 00:03:38.664 CC lib/nvme/nvme_opal.o 00:03:38.664 CC lib/nvme/nvme_io_msg.o 00:03:38.664 CC lib/nvme/nvme_poll_group.o 00:03:38.922 CC lib/nvme/nvme_zns.o 00:03:38.922 CC lib/nvme/nvme_cuse.o 00:03:38.922 CC lib/nvme/nvme_vfio_user.o 00:03:38.922 CC lib/nvme/nvme_rdma.o 00:03:38.922 LIB libspdk_thread.a 00:03:38.922 SO libspdk_thread.so.9.0 00:03:39.180 SYMLINK libspdk_thread.so 00:03:39.180 CC lib/accel/accel.o 00:03:39.180 CC lib/blob/blobstore.o 00:03:39.180 CC lib/accel/accel_rpc.o 00:03:39.180 CC lib/init/json_config.o 00:03:39.180 CC lib/virtio/virtio.o 00:03:39.437 CC lib/virtio/virtio_vhost_user.o 00:03:39.437 CC lib/virtio/virtio_vfio_user.o 00:03:39.437 CC lib/init/subsystem.o 00:03:39.437 CC lib/init/subsystem_rpc.o 00:03:39.695 CC lib/accel/accel_sw.o 00:03:39.695 CC lib/virtio/virtio_pci.o 00:03:39.695 CC lib/init/rpc.o 00:03:39.695 CC lib/blob/request.o 00:03:39.695 CC lib/blob/zeroes.o 00:03:39.695 LIB libspdk_init.a 00:03:39.695 SO libspdk_init.so.4.0 00:03:39.695 CC lib/blob/blob_bs_dev.o 00:03:39.695 SYMLINK libspdk_init.so 00:03:39.954 LIB libspdk_virtio.a 00:03:39.954 CC lib/event/app.o 00:03:39.954 CC lib/event/log_rpc.o 00:03:39.954 CC lib/event/reactor.o 00:03:39.954 CC lib/event/app_rpc.o 00:03:39.954 SO libspdk_virtio.so.6.0 00:03:39.954 CC lib/event/scheduler_static.o 00:03:39.954 SYMLINK libspdk_virtio.so 00:03:40.212 LIB libspdk_accel.a 00:03:40.212 SO libspdk_accel.so.14.0 00:03:40.212 LIB libspdk_nvme.a 00:03:40.212 SYMLINK libspdk_accel.so 00:03:40.471 LIB libspdk_event.a 00:03:40.471 SO libspdk_nvme.so.12.0 00:03:40.471 SO libspdk_event.so.12.0 00:03:40.471 CC lib/bdev/bdev.o 00:03:40.471 CC lib/bdev/bdev_rpc.o 00:03:40.471 CC lib/bdev/bdev_zone.o 00:03:40.471 CC lib/bdev/scsi_nvme.o 00:03:40.471 CC lib/bdev/part.o 00:03:40.471 SYMLINK libspdk_event.so 00:03:40.471 SYMLINK libspdk_nvme.so 00:03:42.385 LIB libspdk_blob.a 00:03:42.647 SO libspdk_blob.so.10.1 00:03:42.647 SYMLINK libspdk_blob.so 00:03:42.908 CC lib/blobfs/blobfs.o 00:03:42.908 CC lib/blobfs/tree.o 00:03:42.908 CC lib/lvol/lvol.o 00:03:43.478 LIB libspdk_bdev.a 00:03:43.478 SO libspdk_bdev.so.14.0 00:03:43.478 SYMLINK libspdk_bdev.so 00:03:43.737 LIB libspdk_blobfs.a 00:03:43.737 CC lib/ublk/ublk.o 00:03:43.737 CC lib/ublk/ublk_rpc.o 00:03:43.737 CC lib/ftl/ftl_core.o 00:03:43.737 CC lib/ftl/ftl_init.o 00:03:43.737 CC lib/scsi/lun.o 00:03:43.737 CC lib/scsi/dev.o 00:03:43.737 CC lib/nbd/nbd.o 00:03:43.737 CC lib/nvmf/ctrlr.o 00:03:43.737 SO libspdk_blobfs.so.9.0 00:03:43.737 SYMLINK libspdk_blobfs.so 00:03:43.737 CC lib/nbd/nbd_rpc.o 00:03:43.737 CC lib/nvmf/ctrlr_discovery.o 00:03:43.737 LIB libspdk_lvol.a 00:03:43.737 SO libspdk_lvol.so.9.1 
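[Editorial note] The make output from here on is SPDK's quiet build legend: CC compiles one object, LIB archives a static libspdk_*.a, SO links the versioned shared object (produced because --with-shared was passed to configure above), and SYMLINK drops the unversioned development link. A sketch for spot-checking the results, assuming SPDK's default output tree build/lib under the repo:

    cd /home/vagrant/spdk_repo/spdk
    # The SO/SYMLINK steps leave a .so -> .so.N.M chain per library:
    ls -l build/lib/libspdk_nvme.so*
    # libspdk_env_dpdk should resolve against the private DPDK built earlier in this job:
    ldd build/lib/libspdk_env_dpdk.so | grep librte_
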
00:03:43.999 CC lib/ftl/ftl_layout.o 00:03:43.999 CC lib/nvmf/ctrlr_bdev.o 00:03:43.999 CC lib/nvmf/subsystem.o 00:03:43.999 SYMLINK libspdk_lvol.so 00:03:43.999 CC lib/scsi/port.o 00:03:43.999 CC lib/nvmf/nvmf.o 00:03:43.999 LIB libspdk_nbd.a 00:03:43.999 SO libspdk_nbd.so.6.0 00:03:43.999 CC lib/nvmf/nvmf_rpc.o 00:03:43.999 SYMLINK libspdk_nbd.so 00:03:43.999 CC lib/scsi/scsi.o 00:03:43.999 CC lib/nvmf/transport.o 00:03:44.257 CC lib/scsi/scsi_bdev.o 00:03:44.257 CC lib/ftl/ftl_debug.o 00:03:44.257 LIB libspdk_ublk.a 00:03:44.257 CC lib/nvmf/tcp.o 00:03:44.257 SO libspdk_ublk.so.2.0 00:03:44.257 CC lib/ftl/ftl_io.o 00:03:44.257 SYMLINK libspdk_ublk.so 00:03:44.257 CC lib/scsi/scsi_pr.o 00:03:44.257 CC lib/nvmf/rdma.o 00:03:44.515 CC lib/scsi/scsi_rpc.o 00:03:44.515 CC lib/ftl/ftl_sb.o 00:03:44.515 CC lib/ftl/ftl_l2p.o 00:03:44.773 CC lib/scsi/task.o 00:03:44.773 CC lib/ftl/ftl_l2p_flat.o 00:03:44.773 CC lib/ftl/ftl_nv_cache.o 00:03:44.773 CC lib/ftl/ftl_band.o 00:03:44.773 CC lib/ftl/ftl_band_ops.o 00:03:44.773 CC lib/ftl/ftl_writer.o 00:03:44.773 CC lib/ftl/ftl_rq.o 00:03:44.773 LIB libspdk_scsi.a 00:03:45.031 SO libspdk_scsi.so.8.0 00:03:45.031 SYMLINK libspdk_scsi.so 00:03:45.031 CC lib/ftl/ftl_reloc.o 00:03:45.031 CC lib/ftl/ftl_l2p_cache.o 00:03:45.031 CC lib/iscsi/conn.o 00:03:45.031 CC lib/iscsi/init_grp.o 00:03:45.031 CC lib/vhost/vhost.o 00:03:45.031 CC lib/iscsi/iscsi.o 00:03:45.289 CC lib/iscsi/md5.o 00:03:45.289 CC lib/iscsi/param.o 00:03:45.289 CC lib/iscsi/portal_grp.o 00:03:45.546 CC lib/iscsi/tgt_node.o 00:03:45.546 CC lib/iscsi/iscsi_subsystem.o 00:03:45.546 CC lib/ftl/ftl_p2l.o 00:03:45.546 CC lib/iscsi/iscsi_rpc.o 00:03:45.546 CC lib/vhost/vhost_rpc.o 00:03:45.546 CC lib/vhost/vhost_scsi.o 00:03:45.804 CC lib/iscsi/task.o 00:03:45.804 CC lib/vhost/vhost_blk.o 00:03:45.804 CC lib/vhost/rte_vhost_user.o 00:03:45.804 CC lib/ftl/mngt/ftl_mngt.o 00:03:45.804 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:45.804 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:46.062 LIB libspdk_nvmf.a 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.062 SO libspdk_nvmf.so.17.0 00:03:46.062 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.322 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.322 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.322 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.322 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:46.322 SYMLINK libspdk_nvmf.so 00:03:46.322 CC lib/ftl/utils/ftl_conf.o 00:03:46.322 CC lib/ftl/utils/ftl_md.o 00:03:46.322 CC lib/ftl/utils/ftl_mempool.o 00:03:46.322 CC lib/ftl/utils/ftl_bitmap.o 00:03:46.580 CC lib/ftl/utils/ftl_property.o 00:03:46.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:46.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:46.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.580 LIB libspdk_iscsi.a 00:03:46.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:46.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.580 SO libspdk_iscsi.so.7.0 00:03:46.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.580 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:46.580 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:46.580 CC lib/ftl/base/ftl_base_dev.o 00:03:46.580 CC lib/ftl/base/ftl_base_bdev.o 00:03:46.580 CC lib/ftl/ftl_trace.o 00:03:46.580 LIB libspdk_vhost.a 00:03:46.580 SYMLINK libspdk_iscsi.so 00:03:46.841 SO 
libspdk_vhost.so.7.1 00:03:46.841 SYMLINK libspdk_vhost.so 00:03:46.841 LIB libspdk_ftl.a 00:03:47.101 SO libspdk_ftl.so.8.0 00:03:47.101 SYMLINK libspdk_ftl.so 00:03:47.361 CC module/env_dpdk/env_dpdk_rpc.o 00:03:47.361 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:47.361 CC module/sock/posix/posix.o 00:03:47.361 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:47.361 CC module/scheduler/gscheduler/gscheduler.o 00:03:47.361 CC module/blob/bdev/blob_bdev.o 00:03:47.361 CC module/accel/iaa/accel_iaa.o 00:03:47.361 CC module/accel/dsa/accel_dsa.o 00:03:47.361 CC module/accel/ioat/accel_ioat.o 00:03:47.361 CC module/accel/error/accel_error.o 00:03:47.361 LIB libspdk_env_dpdk_rpc.a 00:03:47.361 SO libspdk_env_dpdk_rpc.so.5.0 00:03:47.619 LIB libspdk_scheduler_dpdk_governor.a 00:03:47.619 SYMLINK libspdk_env_dpdk_rpc.so 00:03:47.619 CC module/accel/error/accel_error_rpc.o 00:03:47.619 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:47.619 LIB libspdk_scheduler_gscheduler.a 00:03:47.619 CC module/accel/ioat/accel_ioat_rpc.o 00:03:47.619 CC module/accel/iaa/accel_iaa_rpc.o 00:03:47.619 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:47.619 CC module/accel/dsa/accel_dsa_rpc.o 00:03:47.619 SO libspdk_scheduler_gscheduler.so.3.0 00:03:47.619 LIB libspdk_scheduler_dynamic.a 00:03:47.619 SO libspdk_scheduler_dynamic.so.3.0 00:03:47.619 LIB libspdk_blob_bdev.a 00:03:47.619 SYMLINK libspdk_scheduler_gscheduler.so 00:03:47.619 LIB libspdk_accel_error.a 00:03:47.619 SYMLINK libspdk_scheduler_dynamic.so 00:03:47.619 SO libspdk_blob_bdev.so.10.1 00:03:47.619 SO libspdk_accel_error.so.1.0 00:03:47.619 LIB libspdk_accel_ioat.a 00:03:47.619 LIB libspdk_accel_dsa.a 00:03:47.619 SYMLINK libspdk_blob_bdev.so 00:03:47.619 LIB libspdk_accel_iaa.a 00:03:47.619 SO libspdk_accel_ioat.so.5.0 00:03:47.619 SO libspdk_accel_dsa.so.4.0 00:03:47.619 SYMLINK libspdk_accel_error.so 00:03:47.619 SO libspdk_accel_iaa.so.2.0 00:03:47.877 SYMLINK libspdk_accel_ioat.so 00:03:47.877 SYMLINK libspdk_accel_dsa.so 00:03:47.877 SYMLINK libspdk_accel_iaa.so 00:03:47.877 CC module/blobfs/bdev/blobfs_bdev.o 00:03:47.877 CC module/bdev/malloc/bdev_malloc.o 00:03:47.877 CC module/bdev/lvol/vbdev_lvol.o 00:03:47.877 CC module/bdev/delay/vbdev_delay.o 00:03:47.877 CC module/bdev/gpt/gpt.o 00:03:47.877 CC module/bdev/null/bdev_null.o 00:03:47.877 CC module/bdev/nvme/bdev_nvme.o 00:03:47.877 CC module/bdev/error/vbdev_error.o 00:03:47.877 CC module/bdev/passthru/vbdev_passthru.o 00:03:47.877 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:47.877 LIB libspdk_sock_posix.a 00:03:47.877 SO libspdk_sock_posix.so.5.0 00:03:48.135 CC module/bdev/gpt/vbdev_gpt.o 00:03:48.135 LIB libspdk_blobfs_bdev.a 00:03:48.135 SYMLINK libspdk_sock_posix.so 00:03:48.135 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:48.135 CC module/bdev/null/bdev_null_rpc.o 00:03:48.135 SO libspdk_blobfs_bdev.so.5.0 00:03:48.135 CC module/bdev/error/vbdev_error_rpc.o 00:03:48.135 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:48.135 SYMLINK libspdk_blobfs_bdev.so 00:03:48.135 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:48.135 CC module/bdev/nvme/nvme_rpc.o 00:03:48.135 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:48.135 LIB libspdk_bdev_error.a 00:03:48.135 LIB libspdk_bdev_passthru.a 00:03:48.135 LIB libspdk_bdev_null.a 00:03:48.135 LIB libspdk_bdev_delay.a 00:03:48.135 SO libspdk_bdev_passthru.so.5.0 00:03:48.135 SO libspdk_bdev_error.so.5.0 00:03:48.135 LIB libspdk_bdev_gpt.a 00:03:48.393 SO libspdk_bdev_delay.so.5.0 00:03:48.393 SO libspdk_bdev_null.so.5.0 
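[Editorial note] The module/bdev/* objects compiled in this stretch are the block-device backends (malloc, nvme, gpt, null, delay, passthru, ...) that an SPDK application registers at start-up and that are then driven over JSON-RPC. A hedged sketch of exercising one of them after the build completes; the target binary path and RPC method names are the stock ones from the SPDK tree, not shown in this log:

    cd /home/vagrant/spdk_repo/spdk
    # Start the generic target app in the background (assumes the default app gets built later in this run).
    ./build/bin/spdk_tgt &
    # Create a 64 MiB RAM-backed bdev with 512-byte blocks via the malloc module.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # List bdevs to confirm Malloc0 was registered.
    scripts/rpc.py bdev_get_bdevs
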
00:03:48.393 SYMLINK libspdk_bdev_passthru.so 00:03:48.393 SO libspdk_bdev_gpt.so.5.0 00:03:48.393 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:48.393 SYMLINK libspdk_bdev_error.so 00:03:48.393 LIB libspdk_bdev_malloc.a 00:03:48.393 SYMLINK libspdk_bdev_delay.so 00:03:48.393 SYMLINK libspdk_bdev_null.so 00:03:48.393 SO libspdk_bdev_malloc.so.5.0 00:03:48.393 SYMLINK libspdk_bdev_gpt.so 00:03:48.393 CC module/bdev/nvme/bdev_mdns_client.o 00:03:48.393 CC module/bdev/nvme/vbdev_opal.o 00:03:48.393 SYMLINK libspdk_bdev_malloc.so 00:03:48.393 CC module/bdev/split/vbdev_split.o 00:03:48.393 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:48.393 CC module/bdev/raid/bdev_raid.o 00:03:48.393 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:48.393 CC module/bdev/xnvme/bdev_xnvme.o 00:03:48.393 CC module/bdev/split/vbdev_split_rpc.o 00:03:48.651 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:48.651 LIB libspdk_bdev_lvol.a 00:03:48.651 LIB libspdk_bdev_split.a 00:03:48.651 SO libspdk_bdev_lvol.so.5.0 00:03:48.651 SO libspdk_bdev_split.so.5.0 00:03:48.651 CC module/bdev/aio/bdev_aio.o 00:03:48.652 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:48.652 CC module/bdev/ftl/bdev_ftl.o 00:03:48.652 SYMLINK libspdk_bdev_lvol.so 00:03:48.652 CC module/bdev/aio/bdev_aio_rpc.o 00:03:48.652 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:48.652 SYMLINK libspdk_bdev_split.so 00:03:48.652 CC module/bdev/raid/bdev_raid_rpc.o 00:03:48.652 CC module/bdev/raid/bdev_raid_sb.o 00:03:48.652 CC module/bdev/raid/raid0.o 00:03:48.910 LIB libspdk_bdev_xnvme.a 00:03:48.910 LIB libspdk_bdev_zone_block.a 00:03:48.910 SO libspdk_bdev_zone_block.so.5.0 00:03:48.910 SO libspdk_bdev_xnvme.so.2.0 00:03:48.910 SYMLINK libspdk_bdev_xnvme.so 00:03:48.910 SYMLINK libspdk_bdev_zone_block.so 00:03:48.910 CC module/bdev/raid/raid1.o 00:03:48.910 LIB libspdk_bdev_aio.a 00:03:48.910 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:48.910 CC module/bdev/iscsi/bdev_iscsi.o 00:03:48.910 SO libspdk_bdev_aio.so.5.0 00:03:48.910 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:48.910 CC module/bdev/raid/concat.o 00:03:48.910 SYMLINK libspdk_bdev_aio.so 00:03:48.910 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:48.910 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:48.910 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:49.168 LIB libspdk_bdev_ftl.a 00:03:49.168 SO libspdk_bdev_ftl.so.5.0 00:03:49.168 SYMLINK libspdk_bdev_ftl.so 00:03:49.168 LIB libspdk_bdev_iscsi.a 00:03:49.168 LIB libspdk_bdev_raid.a 00:03:49.430 SO libspdk_bdev_iscsi.so.5.0 00:03:49.430 SO libspdk_bdev_raid.so.5.0 00:03:49.430 SYMLINK libspdk_bdev_iscsi.so 00:03:49.430 SYMLINK libspdk_bdev_raid.so 00:03:49.430 LIB libspdk_bdev_virtio.a 00:03:49.430 SO libspdk_bdev_virtio.so.5.0 00:03:49.689 SYMLINK libspdk_bdev_virtio.so 00:03:49.946 LIB libspdk_bdev_nvme.a 00:03:50.204 SO libspdk_bdev_nvme.so.6.0 00:03:50.204 SYMLINK libspdk_bdev_nvme.so 00:03:50.463 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.463 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.463 CC module/event/subsystems/sock/sock.o 00:03:50.463 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.463 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.463 CC module/event/subsystems/vmd/vmd.o 00:03:50.463 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.463 LIB libspdk_event_sock.a 00:03:50.463 LIB libspdk_event_scheduler.a 00:03:50.463 LIB libspdk_event_iobuf.a 00:03:50.724 LIB libspdk_event_vhost_blk.a 00:03:50.724 LIB libspdk_event_vmd.a 00:03:50.724 SO libspdk_event_scheduler.so.3.0 00:03:50.724 SO 
libspdk_event_sock.so.4.0 00:03:50.724 SO libspdk_event_iobuf.so.2.0 00:03:50.724 SO libspdk_event_vmd.so.5.0 00:03:50.724 SO libspdk_event_vhost_blk.so.2.0 00:03:50.724 SYMLINK libspdk_event_sock.so 00:03:50.724 SYMLINK libspdk_event_scheduler.so 00:03:50.724 SYMLINK libspdk_event_iobuf.so 00:03:50.724 SYMLINK libspdk_event_vhost_blk.so 00:03:50.724 SYMLINK libspdk_event_vmd.so 00:03:50.724 CC module/event/subsystems/accel/accel.o 00:03:50.986 LIB libspdk_event_accel.a 00:03:50.986 SO libspdk_event_accel.so.5.0 00:03:50.986 SYMLINK libspdk_event_accel.so 00:03:51.248 CC module/event/subsystems/bdev/bdev.o 00:03:51.248 LIB libspdk_event_bdev.a 00:03:51.248 SO libspdk_event_bdev.so.5.0 00:03:51.248 SYMLINK libspdk_event_bdev.so 00:03:51.506 CC module/event/subsystems/nbd/nbd.o 00:03:51.507 CC module/event/subsystems/ublk/ublk.o 00:03:51.507 CC module/event/subsystems/scsi/scsi.o 00:03:51.507 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.507 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.507 LIB libspdk_event_nbd.a 00:03:51.507 LIB libspdk_event_ublk.a 00:03:51.507 LIB libspdk_event_scsi.a 00:03:51.507 SO libspdk_event_nbd.so.5.0 00:03:51.507 SO libspdk_event_ublk.so.2.0 00:03:51.507 SO libspdk_event_scsi.so.5.0 00:03:51.765 LIB libspdk_event_nvmf.a 00:03:51.765 SYMLINK libspdk_event_nbd.so 00:03:51.766 SYMLINK libspdk_event_ublk.so 00:03:51.766 SYMLINK libspdk_event_scsi.so 00:03:51.766 SO libspdk_event_nvmf.so.5.0 00:03:51.766 SYMLINK libspdk_event_nvmf.so 00:03:51.766 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.766 CC module/event/subsystems/iscsi/iscsi.o 00:03:52.026 LIB libspdk_event_vhost_scsi.a 00:03:52.026 LIB libspdk_event_iscsi.a 00:03:52.026 SO libspdk_event_vhost_scsi.so.2.0 00:03:52.026 SO libspdk_event_iscsi.so.5.0 00:03:52.026 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.026 SYMLINK libspdk_event_iscsi.so 00:03:52.026 SO libspdk.so.5.0 00:03:52.026 SYMLINK libspdk.so 00:03:52.287 CC app/trace_record/trace_record.o 00:03:52.287 CXX app/trace/trace.o 00:03:52.287 CC app/nvmf_tgt/nvmf_main.o 00:03:52.287 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.287 CC examples/ioat/perf/perf.o 00:03:52.287 CC examples/accel/perf/accel_perf.o 00:03:52.287 CC app/spdk_tgt/spdk_tgt.o 00:03:52.287 CC test/accel/dif/dif.o 00:03:52.287 CC examples/bdev/hello_world/hello_bdev.o 00:03:52.287 CC examples/blob/hello_world/hello_blob.o 00:03:52.287 LINK nvmf_tgt 00:03:52.547 LINK iscsi_tgt 00:03:52.547 LINK spdk_tgt 00:03:52.547 LINK spdk_trace_record 00:03:52.547 LINK ioat_perf 00:03:52.547 LINK hello_bdev 00:03:52.547 LINK hello_blob 00:03:52.547 LINK spdk_trace 00:03:52.547 CC examples/ioat/verify/verify.o 00:03:52.547 CC app/spdk_lspci/spdk_lspci.o 00:03:52.547 CC examples/blob/cli/blobcli.o 00:03:52.547 CC examples/bdev/bdevperf/bdevperf.o 00:03:52.808 LINK dif 00:03:52.808 LINK accel_perf 00:03:52.808 CC test/app/histogram_perf/histogram_perf.o 00:03:52.808 CC test/app/jsoncat/jsoncat.o 00:03:52.808 CC test/app/bdev_svc/bdev_svc.o 00:03:52.808 LINK spdk_lspci 00:03:52.808 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:52.808 LINK verify 00:03:52.808 LINK jsoncat 00:03:52.808 CC test/app/stub/stub.o 00:03:52.808 LINK histogram_perf 00:03:52.808 CC app/spdk_nvme_perf/perf.o 00:03:52.808 LINK bdev_svc 00:03:53.069 TEST_HEADER include/spdk/accel.h 00:03:53.069 TEST_HEADER include/spdk/accel_module.h 00:03:53.069 TEST_HEADER include/spdk/assert.h 00:03:53.069 TEST_HEADER include/spdk/barrier.h 00:03:53.069 TEST_HEADER include/spdk/base64.h 00:03:53.069 TEST_HEADER include/spdk/bdev.h 
00:03:53.069 TEST_HEADER include/spdk/bdev_module.h 00:03:53.069 TEST_HEADER include/spdk/bdev_zone.h 00:03:53.069 TEST_HEADER include/spdk/bit_array.h 00:03:53.069 TEST_HEADER include/spdk/bit_pool.h 00:03:53.069 TEST_HEADER include/spdk/blob_bdev.h 00:03:53.069 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:53.069 TEST_HEADER include/spdk/blobfs.h 00:03:53.069 LINK stub 00:03:53.069 TEST_HEADER include/spdk/blob.h 00:03:53.069 CC test/bdev/bdevio/bdevio.o 00:03:53.069 TEST_HEADER include/spdk/conf.h 00:03:53.069 TEST_HEADER include/spdk/config.h 00:03:53.069 TEST_HEADER include/spdk/cpuset.h 00:03:53.069 TEST_HEADER include/spdk/crc16.h 00:03:53.069 TEST_HEADER include/spdk/crc32.h 00:03:53.069 TEST_HEADER include/spdk/crc64.h 00:03:53.069 TEST_HEADER include/spdk/dif.h 00:03:53.069 TEST_HEADER include/spdk/dma.h 00:03:53.069 TEST_HEADER include/spdk/endian.h 00:03:53.069 TEST_HEADER include/spdk/env_dpdk.h 00:03:53.069 TEST_HEADER include/spdk/env.h 00:03:53.069 TEST_HEADER include/spdk/event.h 00:03:53.069 TEST_HEADER include/spdk/fd_group.h 00:03:53.069 TEST_HEADER include/spdk/fd.h 00:03:53.069 TEST_HEADER include/spdk/file.h 00:03:53.069 TEST_HEADER include/spdk/ftl.h 00:03:53.069 TEST_HEADER include/spdk/gpt_spec.h 00:03:53.069 TEST_HEADER include/spdk/hexlify.h 00:03:53.069 TEST_HEADER include/spdk/histogram_data.h 00:03:53.069 TEST_HEADER include/spdk/idxd.h 00:03:53.069 CC test/blobfs/mkfs/mkfs.o 00:03:53.069 TEST_HEADER include/spdk/idxd_spec.h 00:03:53.069 TEST_HEADER include/spdk/init.h 00:03:53.069 TEST_HEADER include/spdk/ioat.h 00:03:53.069 TEST_HEADER include/spdk/ioat_spec.h 00:03:53.069 TEST_HEADER include/spdk/iscsi_spec.h 00:03:53.069 TEST_HEADER include/spdk/json.h 00:03:53.069 TEST_HEADER include/spdk/jsonrpc.h 00:03:53.069 TEST_HEADER include/spdk/likely.h 00:03:53.069 TEST_HEADER include/spdk/log.h 00:03:53.069 TEST_HEADER include/spdk/lvol.h 00:03:53.069 TEST_HEADER include/spdk/memory.h 00:03:53.069 TEST_HEADER include/spdk/mmio.h 00:03:53.069 TEST_HEADER include/spdk/nbd.h 00:03:53.069 TEST_HEADER include/spdk/notify.h 00:03:53.069 TEST_HEADER include/spdk/nvme.h 00:03:53.069 TEST_HEADER include/spdk/nvme_intel.h 00:03:53.069 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:53.069 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:53.069 TEST_HEADER include/spdk/nvme_spec.h 00:03:53.069 TEST_HEADER include/spdk/nvme_zns.h 00:03:53.069 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:53.069 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:53.069 TEST_HEADER include/spdk/nvmf.h 00:03:53.069 TEST_HEADER include/spdk/nvmf_spec.h 00:03:53.069 TEST_HEADER include/spdk/nvmf_transport.h 00:03:53.069 TEST_HEADER include/spdk/opal.h 00:03:53.069 TEST_HEADER include/spdk/opal_spec.h 00:03:53.069 TEST_HEADER include/spdk/pci_ids.h 00:03:53.069 TEST_HEADER include/spdk/pipe.h 00:03:53.069 TEST_HEADER include/spdk/queue.h 00:03:53.069 TEST_HEADER include/spdk/reduce.h 00:03:53.069 CC test/dma/test_dma/test_dma.o 00:03:53.069 TEST_HEADER include/spdk/rpc.h 00:03:53.069 TEST_HEADER include/spdk/scheduler.h 00:03:53.069 TEST_HEADER include/spdk/scsi.h 00:03:53.069 TEST_HEADER include/spdk/scsi_spec.h 00:03:53.069 LINK blobcli 00:03:53.069 TEST_HEADER include/spdk/sock.h 00:03:53.069 TEST_HEADER include/spdk/stdinc.h 00:03:53.069 TEST_HEADER include/spdk/string.h 00:03:53.069 TEST_HEADER include/spdk/thread.h 00:03:53.069 TEST_HEADER include/spdk/trace.h 00:03:53.069 TEST_HEADER include/spdk/trace_parser.h 00:03:53.069 TEST_HEADER include/spdk/tree.h 00:03:53.069 TEST_HEADER include/spdk/ublk.h 
00:03:53.069 TEST_HEADER include/spdk/util.h 00:03:53.069 TEST_HEADER include/spdk/uuid.h 00:03:53.069 TEST_HEADER include/spdk/version.h 00:03:53.069 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:53.069 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:53.069 TEST_HEADER include/spdk/vhost.h 00:03:53.069 TEST_HEADER include/spdk/vmd.h 00:03:53.069 TEST_HEADER include/spdk/xor.h 00:03:53.069 TEST_HEADER include/spdk/zipf.h 00:03:53.069 CXX test/cpp_headers/accel.o 00:03:53.069 LINK nvme_fuzz 00:03:53.069 CC test/env/mem_callbacks/mem_callbacks.o 00:03:53.331 LINK mkfs 00:03:53.331 CC test/event/event_perf/event_perf.o 00:03:53.331 CXX test/cpp_headers/accel_module.o 00:03:53.331 CC test/event/reactor/reactor.o 00:03:53.331 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:53.331 LINK mem_callbacks 00:03:53.331 LINK event_perf 00:03:53.331 LINK bdevio 00:03:53.331 CXX test/cpp_headers/assert.o 00:03:53.331 LINK bdevperf 00:03:53.331 CC test/event/reactor_perf/reactor_perf.o 00:03:53.331 LINK reactor 00:03:53.331 LINK test_dma 00:03:53.592 CC test/env/vtophys/vtophys.o 00:03:53.592 LINK reactor_perf 00:03:53.592 LINK spdk_nvme_perf 00:03:53.592 CXX test/cpp_headers/barrier.o 00:03:53.592 CC test/lvol/esnap/esnap.o 00:03:53.592 LINK vtophys 00:03:53.592 CC examples/nvme/hello_world/hello_world.o 00:03:53.592 CC test/rpc_client/rpc_client_test.o 00:03:53.592 CC test/nvme/aer/aer.o 00:03:53.592 CC test/event/app_repeat/app_repeat.o 00:03:53.592 CC examples/nvme/reconnect/reconnect.o 00:03:53.592 CXX test/cpp_headers/base64.o 00:03:53.592 CC app/spdk_nvme_identify/identify.o 00:03:53.854 LINK app_repeat 00:03:53.854 LINK rpc_client_test 00:03:53.854 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:53.854 CXX test/cpp_headers/bdev.o 00:03:53.854 LINK hello_world 00:03:53.854 LINK aer 00:03:53.854 CXX test/cpp_headers/bdev_module.o 00:03:53.854 LINK env_dpdk_post_init 00:03:53.854 CXX test/cpp_headers/bdev_zone.o 00:03:53.854 CC test/event/scheduler/scheduler.o 00:03:54.115 LINK reconnect 00:03:54.115 CC test/nvme/reset/reset.o 00:03:54.115 CC test/thread/poller_perf/poller_perf.o 00:03:54.115 CC test/env/memory/memory_ut.o 00:03:54.115 CXX test/cpp_headers/bit_array.o 00:03:54.115 CC test/env/pci/pci_ut.o 00:03:54.115 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.115 LINK scheduler 00:03:54.115 LINK poller_perf 00:03:54.377 CXX test/cpp_headers/bit_pool.o 00:03:54.377 LINK reset 00:03:54.377 LINK spdk_nvme_identify 00:03:54.377 CC examples/nvme/arbitration/arbitration.o 00:03:54.377 CC examples/sock/hello_world/hello_sock.o 00:03:54.377 CXX test/cpp_headers/blob_bdev.o 00:03:54.377 CC test/nvme/sgl/sgl.o 00:03:54.377 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.637 LINK pci_ut 00:03:54.637 LINK memory_ut 00:03:54.637 CXX test/cpp_headers/blobfs_bdev.o 00:03:54.637 LINK spdk_nvme_discover 00:03:54.637 LINK nvme_manage 00:03:54.637 LINK hello_sock 00:03:54.637 LINK sgl 00:03:54.637 LINK arbitration 00:03:54.637 CXX test/cpp_headers/blobfs.o 00:03:54.637 CXX test/cpp_headers/blob.o 00:03:54.637 CC examples/nvme/hotplug/hotplug.o 00:03:54.898 CC app/spdk_top/spdk_top.o 00:03:54.898 CC examples/vmd/lsvmd/lsvmd.o 00:03:54.898 CC examples/nvmf/nvmf/nvmf.o 00:03:54.898 CXX test/cpp_headers/conf.o 00:03:54.898 CC test/nvme/e2edp/nvme_dp.o 00:03:54.898 CC test/nvme/err_injection/err_injection.o 00:03:54.898 CC test/nvme/overhead/overhead.o 00:03:54.898 LINK lsvmd 00:03:54.898 LINK hotplug 00:03:54.898 LINK iscsi_fuzz 00:03:54.898 CXX test/cpp_headers/config.o 00:03:54.898 CXX 
test/cpp_headers/cpuset.o 00:03:55.159 LINK nvmf 00:03:55.159 LINK err_injection 00:03:55.159 CC examples/vmd/led/led.o 00:03:55.159 LINK nvme_dp 00:03:55.159 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:55.159 CXX test/cpp_headers/crc16.o 00:03:55.159 LINK overhead 00:03:55.159 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.159 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.159 LINK led 00:03:55.159 CC examples/util/zipf/zipf.o 00:03:55.159 CXX test/cpp_headers/crc32.o 00:03:55.420 LINK cmb_copy 00:03:55.420 CC examples/nvme/abort/abort.o 00:03:55.420 CC test/nvme/startup/startup.o 00:03:55.420 CC test/nvme/reserve/reserve.o 00:03:55.420 LINK zipf 00:03:55.420 CXX test/cpp_headers/crc64.o 00:03:55.420 CC test/nvme/simple_copy/simple_copy.o 00:03:55.420 LINK spdk_top 00:03:55.420 CXX test/cpp_headers/dif.o 00:03:55.420 CC examples/thread/thread/thread_ex.o 00:03:55.420 LINK startup 00:03:55.681 LINK reserve 00:03:55.681 LINK vhost_fuzz 00:03:55.681 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.681 LINK simple_copy 00:03:55.681 CXX test/cpp_headers/dma.o 00:03:55.681 CC app/vhost/vhost.o 00:03:55.681 LINK abort 00:03:55.681 CC app/spdk_dd/spdk_dd.o 00:03:55.681 LINK thread 00:03:55.681 CC test/nvme/connect_stress/connect_stress.o 00:03:55.681 LINK pmr_persistence 00:03:55.681 CC app/fio/nvme/fio_plugin.o 00:03:55.681 CXX test/cpp_headers/endian.o 00:03:55.943 CC app/fio/bdev/fio_plugin.o 00:03:55.943 LINK vhost 00:03:55.943 CC test/nvme/boot_partition/boot_partition.o 00:03:55.943 LINK connect_stress 00:03:55.943 CC test/nvme/compliance/nvme_compliance.o 00:03:55.943 CXX test/cpp_headers/env_dpdk.o 00:03:55.943 CC examples/idxd/perf/perf.o 00:03:55.943 LINK boot_partition 00:03:55.943 CXX test/cpp_headers/env.o 00:03:55.943 LINK spdk_dd 00:03:55.943 CC test/nvme/fused_ordering/fused_ordering.o 00:03:55.943 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.205 CC test/nvme/fdp/fdp.o 00:03:56.205 CXX test/cpp_headers/event.o 00:03:56.205 LINK fused_ordering 00:03:56.205 LINK doorbell_aers 00:03:56.205 LINK nvme_compliance 00:03:56.205 CXX test/cpp_headers/fd_group.o 00:03:56.205 CC test/nvme/cuse/cuse.o 00:03:56.205 LINK spdk_nvme 00:03:56.205 LINK idxd_perf 00:03:56.205 LINK spdk_bdev 00:03:56.205 LINK fdp 00:03:56.205 CXX test/cpp_headers/fd.o 00:03:56.205 CXX test/cpp_headers/file.o 00:03:56.466 CXX test/cpp_headers/ftl.o 00:03:56.466 CXX test/cpp_headers/gpt_spec.o 00:03:56.466 CXX test/cpp_headers/hexlify.o 00:03:56.466 CXX test/cpp_headers/histogram_data.o 00:03:56.466 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:56.466 CXX test/cpp_headers/idxd.o 00:03:56.466 CXX test/cpp_headers/idxd_spec.o 00:03:56.466 CXX test/cpp_headers/init.o 00:03:56.466 CXX test/cpp_headers/ioat.o 00:03:56.466 CXX test/cpp_headers/ioat_spec.o 00:03:56.466 CXX test/cpp_headers/iscsi_spec.o 00:03:56.466 CXX test/cpp_headers/json.o 00:03:56.466 LINK interrupt_tgt 00:03:56.466 CXX test/cpp_headers/jsonrpc.o 00:03:56.466 CXX test/cpp_headers/likely.o 00:03:56.466 CXX test/cpp_headers/lvol.o 00:03:56.466 CXX test/cpp_headers/log.o 00:03:56.466 CXX test/cpp_headers/memory.o 00:03:56.727 CXX test/cpp_headers/mmio.o 00:03:56.727 CXX test/cpp_headers/nbd.o 00:03:56.727 CXX test/cpp_headers/notify.o 00:03:56.727 CXX test/cpp_headers/nvme.o 00:03:56.727 CXX test/cpp_headers/nvme_intel.o 00:03:56.727 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.727 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.727 CXX test/cpp_headers/nvme_spec.o 00:03:56.727 CXX test/cpp_headers/nvme_zns.o 00:03:56.727 CXX 
test/cpp_headers/nvmf_cmd.o 00:03:56.727 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.727 CXX test/cpp_headers/nvmf.o 00:03:56.727 CXX test/cpp_headers/nvmf_spec.o 00:03:56.727 CXX test/cpp_headers/nvmf_transport.o 00:03:56.727 CXX test/cpp_headers/opal.o 00:03:56.727 CXX test/cpp_headers/opal_spec.o 00:03:56.727 CXX test/cpp_headers/pci_ids.o 00:03:56.986 CXX test/cpp_headers/pipe.o 00:03:56.986 CXX test/cpp_headers/queue.o 00:03:56.986 CXX test/cpp_headers/reduce.o 00:03:56.986 CXX test/cpp_headers/rpc.o 00:03:56.986 CXX test/cpp_headers/scheduler.o 00:03:56.986 CXX test/cpp_headers/scsi.o 00:03:56.986 CXX test/cpp_headers/scsi_spec.o 00:03:56.986 CXX test/cpp_headers/sock.o 00:03:56.986 CXX test/cpp_headers/stdinc.o 00:03:56.986 CXX test/cpp_headers/string.o 00:03:56.986 CXX test/cpp_headers/thread.o 00:03:56.986 CXX test/cpp_headers/trace.o 00:03:56.986 CXX test/cpp_headers/trace_parser.o 00:03:56.986 CXX test/cpp_headers/tree.o 00:03:56.986 CXX test/cpp_headers/util.o 00:03:56.986 CXX test/cpp_headers/ublk.o 00:03:57.246 CXX test/cpp_headers/uuid.o 00:03:57.246 CXX test/cpp_headers/version.o 00:03:57.246 CXX test/cpp_headers/vfio_user_pci.o 00:03:57.246 CXX test/cpp_headers/vfio_user_spec.o 00:03:57.246 CXX test/cpp_headers/vhost.o 00:03:57.246 LINK cuse 00:03:57.246 CXX test/cpp_headers/vmd.o 00:03:57.246 CXX test/cpp_headers/xor.o 00:03:57.247 CXX test/cpp_headers/zipf.o 00:03:58.189 LINK esnap 00:03:58.189 00:03:58.189 real 0m40.206s 00:03:58.189 user 3m50.453s 00:03:58.189 sys 0m44.597s 00:03:58.189 06:29:08 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:58.189 06:29:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:58.189 ************************************ 00:03:58.189 END TEST make 00:03:58.189 ************************************ 00:03:58.449 06:29:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:58.449 06:29:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:58.449 06:29:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:58.449 06:29:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:58.449 06:29:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:58.449 06:29:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:58.449 06:29:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:58.449 06:29:09 -- scripts/common.sh@335 -- # IFS=.-: 00:03:58.449 06:29:09 -- scripts/common.sh@335 -- # read -ra ver1 00:03:58.449 06:29:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.449 06:29:09 -- scripts/common.sh@336 -- # read -ra ver2 00:03:58.449 06:29:09 -- scripts/common.sh@337 -- # local 'op=<' 00:03:58.449 06:29:09 -- scripts/common.sh@339 -- # ver1_l=2 00:03:58.449 06:29:09 -- scripts/common.sh@340 -- # ver2_l=1 00:03:58.449 06:29:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:58.449 06:29:09 -- scripts/common.sh@343 -- # case "$op" in 00:03:58.449 06:29:09 -- scripts/common.sh@344 -- # : 1 00:03:58.449 06:29:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:58.449 06:29:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.449 06:29:09 -- scripts/common.sh@364 -- # decimal 1 00:03:58.449 06:29:09 -- scripts/common.sh@352 -- # local d=1 00:03:58.449 06:29:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.449 06:29:09 -- scripts/common.sh@354 -- # echo 1 00:03:58.449 06:29:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:58.449 06:29:09 -- scripts/common.sh@365 -- # decimal 2 00:03:58.449 06:29:09 -- scripts/common.sh@352 -- # local d=2 00:03:58.449 06:29:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.449 06:29:09 -- scripts/common.sh@354 -- # echo 2 00:03:58.449 06:29:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:58.449 06:29:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:58.449 06:29:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:58.449 06:29:09 -- scripts/common.sh@367 -- # return 0 00:03:58.449 06:29:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.449 06:29:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.449 --rc genhtml_branch_coverage=1 00:03:58.449 --rc genhtml_function_coverage=1 00:03:58.449 --rc genhtml_legend=1 00:03:58.449 --rc geninfo_all_blocks=1 00:03:58.449 --rc geninfo_unexecuted_blocks=1 00:03:58.449 00:03:58.449 ' 00:03:58.449 06:29:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.449 --rc genhtml_branch_coverage=1 00:03:58.449 --rc genhtml_function_coverage=1 00:03:58.449 --rc genhtml_legend=1 00:03:58.449 --rc geninfo_all_blocks=1 00:03:58.449 --rc geninfo_unexecuted_blocks=1 00:03:58.449 00:03:58.449 ' 00:03:58.449 06:29:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.449 --rc genhtml_branch_coverage=1 00:03:58.449 --rc genhtml_function_coverage=1 00:03:58.449 --rc genhtml_legend=1 00:03:58.449 --rc geninfo_all_blocks=1 00:03:58.449 --rc geninfo_unexecuted_blocks=1 00:03:58.449 00:03:58.449 ' 00:03:58.449 06:29:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:58.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.449 --rc genhtml_branch_coverage=1 00:03:58.449 --rc genhtml_function_coverage=1 00:03:58.449 --rc genhtml_legend=1 00:03:58.449 --rc geninfo_all_blocks=1 00:03:58.449 --rc geninfo_unexecuted_blocks=1 00:03:58.449 00:03:58.449 ' 00:03:58.449 06:29:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:58.449 06:29:09 -- nvmf/common.sh@7 -- # uname -s 00:03:58.449 06:29:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.449 06:29:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.449 06:29:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.449 06:29:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.449 06:29:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:58.449 06:29:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.449 06:29:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:58.449 06:29:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.449 06:29:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.449 06:29:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.449 06:29:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d94a1e48-332c-4779-8460-834f9d0a8e4e 00:03:58.449 
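Editor's note: the xtrace above (scripts/common.sh, `lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares the components numerically, treating missing components as 0. A minimal standalone bash sketch of that idea, not the repo's actual function:

    version_lt() {                                  # true (0) when $1 < $2
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"            # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the trace's lt 1.15 2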
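The nvmf/common.sh setup in the same span calls `nvme gen-hostnqn` and records the result as NVME_HOSTNQN and NVME_HOSTID. An equivalent value can be assembled by hand when nvme-cli is unavailable; a sketch assuming `uuidgen` (util-linux), with the NQN shape copied from the log line above:

    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # same shape as the logged value
    NVME_HOSTID=${NVME_HOSTNQN##*:}                             # trailing UUID, as in the trace
    echo "$NVME_HOSTNQN ($NVME_HOSTID)"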
06:29:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=d94a1e48-332c-4779-8460-834f9d0a8e4e 00:03:58.449 06:29:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.449 06:29:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.449 06:29:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.449 06:29:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:58.449 06:29:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.449 06:29:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.449 06:29:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.449 06:29:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.449 06:29:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.450 06:29:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.450 06:29:09 -- paths/export.sh@5 -- # export PATH 00:03:58.450 06:29:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.450 06:29:09 -- nvmf/common.sh@46 -- # : 0 00:03:58.450 06:29:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:58.450 06:29:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:58.450 06:29:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:58.450 06:29:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.450 06:29:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.450 06:29:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:58.450 06:29:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:58.450 06:29:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:58.450 06:29:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:58.450 06:29:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:58.450 06:29:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:58.450 06:29:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:58.450 06:29:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:58.450 06:29:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:58.450 06:29:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:58.450 06:29:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:58.450 06:29:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:58.450 06:29:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:58.450 06:29:09 -- spdk/autotest.sh@48 
-- # udevadm_pid=60275 00:03:58.450 06:29:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:58.450 06:29:09 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:58.450 06:29:09 -- spdk/autotest.sh@54 -- # echo 60277 00:03:58.450 06:29:09 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:58.450 06:29:09 -- spdk/autotest.sh@56 -- # echo 60280 00:03:58.450 06:29:09 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:58.450 06:29:09 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:58.450 06:29:09 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.450 06:29:09 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:58.450 06:29:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.450 06:29:09 -- common/autotest_common.sh@10 -- # set +x 00:03:58.450 06:29:09 -- spdk/autotest.sh@70 -- # create_test_list 00:03:58.450 06:29:09 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:58.450 06:29:09 -- common/autotest_common.sh@10 -- # set +x 00:03:58.450 06:29:09 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:58.450 06:29:09 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:58.450 06:29:09 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:58.450 06:29:09 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:58.450 06:29:09 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:58.450 06:29:09 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:58.450 06:29:09 -- common/autotest_common.sh@1450 -- # uname 00:03:58.450 06:29:09 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:58.450 06:29:09 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:58.711 06:29:09 -- common/autotest_common.sh@1470 -- # uname 00:03:58.711 06:29:09 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:58.711 06:29:09 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:58.711 06:29:09 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:58.711 lcov: LCOV version 1.15 00:03:58.711 06:29:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:06.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:06.841 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:06.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:06.841 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:06.841 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:06.841 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:24.966 06:29:35 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:24.966 06:29:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.966 06:29:35 -- common/autotest_common.sh@10 -- # set +x 00:04:24.966 06:29:35 -- spdk/autotest.sh@89 -- # rm -f 00:04:24.966 06:29:35 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.483 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:25.483 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:25.483 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:25.483 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:25.483 06:29:36 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:25.483 06:29:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:25.483 06:29:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:25.483 06:29:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:25.483 06:29:36 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:25.483 06:29:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:25.483 06:29:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:25.483 06:29:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:25.483 06:29:36 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:25.483 06:29:36 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1 00:04:25.483 06:29:36 -- spdk/autotest.sh@108 -- # grep -v p 00:04:25.483 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.483 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.483 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:25.483 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:25.483 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:25.483 No valid GPT data, bailing 00:04:25.483 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.483 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.483 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.483 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:25.483 1+0 records in 00:04:25.483 1+0 records out 00:04:25.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00941911 s, 111 MB/s 00:04:25.483 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.483 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.483 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:25.483 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:25.483 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:25.483 No valid GPT data, bailing 00:04:25.483 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:25.741 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.741 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.741 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:25.741 1+0 records in 00:04:25.741 1+0 records out 00:04:25.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415787 s, 252 MB/s 00:04:25.741 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.741 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.741 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:25.741 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:25.742 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:25.742 No valid GPT data, bailing 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.742 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.742 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:25.742 1+0 
records in 00:04:25.742 1+0 records out 00:04:25.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319334 s, 328 MB/s 00:04:25.742 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.742 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.742 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:25.742 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:25.742 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:25.742 No valid GPT data, bailing 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.742 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.742 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:25.742 1+0 records in 00:04:25.742 1+0 records out 00:04:25.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00281926 s, 372 MB/s 00:04:25.742 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.742 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.742 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:04:25.742 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:25.742 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:25.742 No valid GPT data, bailing 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.742 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.742 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:25.742 1+0 records in 00:04:25.742 1+0 records out 00:04:25.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487236 s, 215 MB/s 00:04:25.742 06:29:36 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:25.742 06:29:36 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:25.742 06:29:36 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:04:25.742 06:29:36 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:25.742 06:29:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:25.742 No valid GPT data, bailing 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:25.742 06:29:36 -- scripts/common.sh@393 -- # pt= 00:04:25.742 06:29:36 -- scripts/common.sh@394 -- # return 1 00:04:25.742 06:29:36 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:25.742 1+0 records in 00:04:25.742 1+0 records out 00:04:25.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324896 s, 323 MB/s 00:04:25.742 06:29:36 -- spdk/autotest.sh@116 -- # sync 00:04:26.307 06:29:36 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:26.308 06:29:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:26.308 06:29:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:27.682 06:29:38 -- spdk/autotest.sh@122 -- # uname -s 00:04:27.682 06:29:38 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:27.682 06:29:38 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:27.682 06:29:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.682 06:29:38 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.682 06:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:27.682 ************************************ 00:04:27.682 START TEST setup.sh 00:04:27.682 ************************************ 00:04:27.682 06:29:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:27.682 * Looking for test storage... 00:04:27.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.682 06:29:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:27.682 06:29:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:27.682 06:29:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.942 06:29:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.942 06:29:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.942 06:29:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.942 06:29:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.942 06:29:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.942 06:29:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.942 06:29:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:27.942 06:29:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.942 06:29:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.942 06:29:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.942 06:29:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.942 06:29:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.942 06:29:38 -- scripts/common.sh@344 -- # : 1 00:04:27.942 06:29:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.942 06:29:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.942 06:29:38 -- scripts/common.sh@364 -- # decimal 1 00:04:27.942 06:29:38 -- scripts/common.sh@352 -- # local d=1 00:04:27.942 06:29:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.942 06:29:38 -- scripts/common.sh@354 -- # echo 1 00:04:27.942 06:29:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.942 06:29:38 -- scripts/common.sh@365 -- # decimal 2 00:04:27.942 06:29:38 -- scripts/common.sh@352 -- # local d=2 00:04:27.942 06:29:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.942 06:29:38 -- scripts/common.sh@354 -- # echo 2 00:04:27.942 06:29:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.942 06:29:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.942 06:29:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.942 06:29:38 -- scripts/common.sh@367 -- # return 0 00:04:27.942 06:29:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- setup/test-setup.sh@10 -- # uname -s 00:04:27.942 06:29:38 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:27.942 06:29:38 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:27.942 06:29:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.942 06:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:27.942 ************************************ 00:04:27.942 START TEST acl 00:04:27.942 ************************************ 00:04:27.942 06:29:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:27.942 * Looking for test storage... 
00:04:27.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.942 06:29:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:27.942 06:29:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.942 06:29:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.942 06:29:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.942 06:29:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.942 06:29:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.942 06:29:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.942 06:29:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.942 06:29:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:27.942 06:29:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.942 06:29:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.942 06:29:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.942 06:29:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.942 06:29:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.942 06:29:38 -- scripts/common.sh@344 -- # : 1 00:04:27.942 06:29:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.942 06:29:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.942 06:29:38 -- scripts/common.sh@364 -- # decimal 1 00:04:27.942 06:29:38 -- scripts/common.sh@352 -- # local d=1 00:04:27.942 06:29:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.942 06:29:38 -- scripts/common.sh@354 -- # echo 1 00:04:27.942 06:29:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.942 06:29:38 -- scripts/common.sh@365 -- # decimal 2 00:04:27.942 06:29:38 -- scripts/common.sh@352 -- # local d=2 00:04:27.942 06:29:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.942 06:29:38 -- scripts/common.sh@354 -- # echo 2 00:04:27.942 06:29:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.942 06:29:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.942 06:29:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.942 06:29:38 -- scripts/common.sh@367 -- # return 0 00:04:27.942 06:29:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.942 --rc genhtml_branch_coverage=1 00:04:27.942 --rc genhtml_function_coverage=1 00:04:27.942 --rc genhtml_legend=1 00:04:27.942 --rc geninfo_all_blocks=1 00:04:27.942 --rc geninfo_unexecuted_blocks=1 00:04:27.942 00:04:27.942 ' 00:04:27.942 06:29:38 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:27.942 06:29:38 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:27.942 06:29:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:27.942 06:29:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:27.942 06:29:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:27.942 06:29:38 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:27.942 
06:29:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:27.942 06:29:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:27.942 06:29:38 -- setup/acl.sh@12 -- # devs=() 00:04:27.943 06:29:38 -- setup/acl.sh@12 -- # declare -a devs 00:04:27.943 06:29:38 -- setup/acl.sh@13 -- # drivers=() 00:04:27.943 06:29:38 -- setup/acl.sh@13 -- # declare -A drivers 00:04:27.943 06:29:38 -- setup/acl.sh@51 -- # setup reset 00:04:27.943 06:29:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.943 06:29:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.882 06:29:39 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:28.882 06:29:39 -- setup/acl.sh@16 -- # local dev driver 00:04:28.882 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.882 06:29:39 -- setup/acl.sh@15 -- # setup output status 00:04:28.882 06:29:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.882 06:29:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:29.141 Hugepages 00:04:29.141 node hugesize free / total 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # continue 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.141 00:04:29.141 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # continue 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:29.141 06:29:39 -- setup/acl.sh@20 -- # continue 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.141 06:29:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.141 06:29:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.141 06:29:39 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.141 06:29:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.141 06:29:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.141 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.401 06:29:39 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:29.401 06:29:39 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.401 06:29:39 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:29.401 06:29:39 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.401 06:29:39 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
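Editor's note: the `get_zoned_devs` trace above probes every /sys/block/nvme* entry and treats a namespace as zoned when its queue/zoned attribute reads anything other than `none`. A condensed sketch of that scan (the real helper tracks more state, such as the device's address):

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") == none ]] && continue    # "none" means conventional
        zoned_devs[${nvme##*/}]=1
    done
    echo "zoned namespaces: ${!zoned_devs[*]}"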
00:04:29.401 06:29:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.401 06:29:39 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:29.401 06:29:39 -- setup/acl.sh@54 -- # run_test denied denied 00:04:29.401 06:29:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.401 06:29:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.401 06:29:39 -- common/autotest_common.sh@10 -- # set +x 00:04:29.401 ************************************ 00:04:29.401 START TEST denied 00:04:29.401 ************************************ 00:04:29.401 06:29:39 -- common/autotest_common.sh@1114 -- # denied 00:04:29.401 06:29:39 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:29.401 06:29:39 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:29.401 06:29:39 -- setup/acl.sh@38 -- # setup output config 00:04:29.401 06:29:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.401 06:29:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.338 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:30.338 06:29:40 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:30.338 06:29:40 -- setup/acl.sh@28 -- # local dev driver 00:04:30.338 06:29:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:30.338 06:29:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:30.338 06:29:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:30.338 06:29:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:30.338 06:29:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:30.338 06:29:40 -- setup/acl.sh@41 -- # setup reset 00:04:30.338 06:29:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.338 06:29:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.923 ************************************ 00:04:36.923 END TEST denied 00:04:36.923 ************************************ 00:04:36.923 00:04:36.923 real 0m6.760s 00:04:36.923 user 0m0.631s 00:04:36.923 sys 0m1.112s 00:04:36.923 06:29:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.923 06:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:36.923 06:29:46 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:36.923 06:29:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.923 06:29:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.923 06:29:46 -- common/autotest_common.sh@10 -- # set +x 00:04:36.923 ************************************ 00:04:36.923 START TEST allowed 00:04:36.923 ************************************ 00:04:36.923 06:29:46 -- common/autotest_common.sh@1114 -- # allowed 00:04:36.923 06:29:46 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:36.923 06:29:46 -- setup/acl.sh@45 -- # setup output config 00:04:36.923 06:29:46 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:36.923 06:29:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.924 06:29:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.186 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.186 06:29:47 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:37.186 06:29:47 -- setup/acl.sh@28 -- # local dev driver 00:04:37.186 06:29:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.186 06:29:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.186 06:29:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.186 06:29:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.186 06:29:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.186 06:29:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.186 06:29:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.186 06:29:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:37.186 06:29:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.186 06:29:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.186 06:29:47 -- setup/acl.sh@48 -- # setup reset 00:04:37.186 06:29:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.186 06:29:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.130 00:04:38.130 real 0m2.001s 00:04:38.130 user 0m0.801s 00:04:38.130 sys 0m1.054s 00:04:38.130 06:29:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.130 06:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:38.130 ************************************ 00:04:38.130 END TEST allowed 00:04:38.130 ************************************ 00:04:38.130 ************************************ 00:04:38.130 END TEST acl 00:04:38.130 ************************************ 00:04:38.130 00:04:38.130 real 0m10.330s 00:04:38.130 user 0m2.083s 00:04:38.130 sys 0m3.068s 00:04:38.130 06:29:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.130 06:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:38.130 06:29:48 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.130 06:29:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.130 06:29:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.130 06:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:38.392 ************************************ 00:04:38.392 START TEST hugepages 00:04:38.392 ************************************ 00:04:38.392 06:29:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.392 * Looking for test storage... 
00:04:38.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.392 06:29:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.392 06:29:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.392 06:29:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.392 06:29:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.392 06:29:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.392 06:29:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.392 06:29:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.392 06:29:49 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.392 06:29:49 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.392 06:29:49 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.392 06:29:49 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.392 06:29:49 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.392 06:29:49 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.392 06:29:49 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.392 06:29:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.392 06:29:49 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.392 06:29:49 -- scripts/common.sh@344 -- # : 1 00:04:38.392 06:29:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.392 06:29:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.392 06:29:49 -- scripts/common.sh@364 -- # decimal 1 00:04:38.392 06:29:49 -- scripts/common.sh@352 -- # local d=1 00:04:38.392 06:29:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.392 06:29:49 -- scripts/common.sh@354 -- # echo 1 00:04:38.392 06:29:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.392 06:29:49 -- scripts/common.sh@365 -- # decimal 2 00:04:38.392 06:29:49 -- scripts/common.sh@352 -- # local d=2 00:04:38.392 06:29:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.392 06:29:49 -- scripts/common.sh@354 -- # echo 2 00:04:38.392 06:29:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.392 06:29:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.392 06:29:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.392 06:29:49 -- scripts/common.sh@367 -- # return 0 00:04:38.392 06:29:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.392 06:29:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.392 --rc genhtml_branch_coverage=1 00:04:38.392 --rc genhtml_function_coverage=1 00:04:38.392 --rc genhtml_legend=1 00:04:38.392 --rc geninfo_all_blocks=1 00:04:38.392 --rc geninfo_unexecuted_blocks=1 00:04:38.392 00:04:38.392 ' 00:04:38.392 06:29:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.392 --rc genhtml_branch_coverage=1 00:04:38.392 --rc genhtml_function_coverage=1 00:04:38.392 --rc genhtml_legend=1 00:04:38.392 --rc geninfo_all_blocks=1 00:04:38.392 --rc geninfo_unexecuted_blocks=1 00:04:38.392 00:04:38.392 ' 00:04:38.392 06:29:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.392 --rc genhtml_branch_coverage=1 00:04:38.392 --rc genhtml_function_coverage=1 00:04:38.392 --rc genhtml_legend=1 00:04:38.392 --rc geninfo_all_blocks=1 00:04:38.392 --rc geninfo_unexecuted_blocks=1 00:04:38.392 00:04:38.392 ' 00:04:38.392 06:29:49 -- 
00:04:38.392 06:29:49 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:38.392 06:29:49 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:38.392 06:29:49 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:38.392 06:29:49 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:38.392 06:29:49 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:38.392 06:29:49 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:38.392 06:29:49 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:38.392 06:29:49 -- setup/common.sh@18 -- # local node=
00:04:38.392 06:29:49 -- setup/common.sh@19 -- # local var val
00:04:38.392 06:29:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:38.392 06:29:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.392 06:29:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.392 06:29:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.392 06:29:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.393 06:29:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.393 06:29:49 -- setup/common.sh@31 -- # IFS=': '
00:04:38.393 06:29:49 -- setup/common.sh@31 -- # read -r var val _
00:04:38.393 06:29:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 4675352 kB' 'MemAvailable: 7351780 kB' 'Buffers: 3704 kB' 'Cached: 2879276 kB' 'SwapCached: 0 kB' 'Active: 465260 kB' 'Inactive: 2533208 kB' 'Active(anon): 126024 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 117236 kB' 'Mapped: 51044 kB' 'Shmem: 10536 kB' 'KReclaimable: 82272 kB' 'Slab: 186716 kB' 'SReclaimable: 82272 kB' 'SUnreclaim: 104444 kB' 'KernelStack: 6688 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12409996 kB' 'Committed_AS: 310104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
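The get_meminfo call traced above snapshots /proc/meminfo with mapfile and then walks it entry by entry until the requested field matches (the long compare/continue run that follows). A self-contained sketch of that lookup, reconstructed from the trace rather than copied from setup/common.sh:

  #!/usr/bin/env bash
  # Sketch: print the value of one /proc/meminfo field, e.g. Hugepagesize.
  get_meminfo() {
      local get=$1 line var val _
      local -a mem
      mapfile -t mem < /proc/meminfo
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"       # e.g. 2048 (kB) for Hugepagesize
              return 0
          fi
      done
      return 1                  # field not present
  }
  get_meminfo Hugepagesize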
00:04:38.393 06:29:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:38.393 06:29:49 -- setup/common.sh@32 -- # continue
00:04:38.393 06:29:49 -- setup/common.sh@31 -- # IFS=': '
00:04:38.393 06:29:49 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read trace entries for MemFree through HugePages_Surp elided ...]
00:04:38.394 06:29:49 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:38.394 06:29:49 -- setup/common.sh@33 -- # echo 2048
00:04:38.394 06:29:49 -- setup/common.sh@33 -- # return 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:38.394 06:29:49 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:38.394 06:29:49 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:38.394 06:29:49 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:38.394 06:29:49 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:38.394 06:29:49 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:38.394 06:29:49 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:38.394 06:29:49 -- setup/hugepages.sh@207 -- # get_nodes
00:04:38.394 06:29:49 -- setup/hugepages.sh@27 -- # local node
00:04:38.394 06:29:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:38.394 06:29:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:38.394 06:29:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:38.394 06:29:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:38.394 06:29:49 -- setup/hugepages.sh@208 -- # clear_hp
00:04:38.394 06:29:49 -- setup/hugepages.sh@37 -- # local node hp
00:04:38.394 06:29:49 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:38.394 06:29:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:38.394 06:29:49 -- setup/hugepages.sh@41 -- # echo 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:38.394 06:29:49 -- setup/hugepages.sh@41 -- # echo 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:38.394 06:29:49 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:38.394 06:29:49 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:38.394 06:29:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:38.394 06:29:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:38.394 06:29:49 -- common/autotest_common.sh@10 -- # set +x
00:04:38.394 ************************************
00:04:38.394 START TEST default_setup
00:04:38.394 ************************************
00:04:38.394 06:29:49 -- common/autotest_common.sh@1114 -- # default_setup
00:04:38.394 06:29:49 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:38.394 06:29:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:38.394 06:29:49 -- setup/hugepages.sh@51 -- # shift
00:04:38.394 06:29:49 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:38.394 06:29:49 -- setup/hugepages.sh@52 -- # local node_ids
00:04:38.394 06:29:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:38.394 06:29:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:38.394 06:29:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:38.394 06:29:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:38.394 06:29:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:38.394 06:29:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:38.394 06:29:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:38.394 06:29:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:38.394 06:29:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:38.394 06:29:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:38.394 06:29:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:38.394 06:29:49 -- setup/hugepages.sh@73 -- # return 0
00:04:38.394 06:29:49 -- setup/hugepages.sh@137 -- # setup output
00:04:38.394 06:29:49 -- setup/common.sh@9 -- # [[ output == output ]]
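get_test_nr_hugepages above turns the requested size of 2097152 kB into nr_hugepages=1024, which is consistent with dividing by the 2048 kB default page size, and clear_hp first zeroes every per-node hugepage pool. A rough equivalent of those two steps (a simplified reconstruction; the real helpers also honor the HUGENODE/NRHUGE-style overrides that are unset in this run, and the sysfs writes need root):

  #!/usr/bin/env bash
  shopt -s nullglob
  default_hugepages=2048                        # kB, from get_meminfo Hugepagesize
  size=2097152                                  # kB requested by the test
  nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
  echo "requesting $nr_hugepages hugepages"
  # clear_hp equivalent: reset each per-node pool before the test allocates
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"
  done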
00:04:38.394 06:29:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:39.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:39.337 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.337 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.337 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.602 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.602 06:29:50 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:39.602 06:29:50 -- setup/hugepages.sh@89 -- # local node
00:04:39.602 06:29:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.602 06:29:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.602 06:29:50 -- setup/hugepages.sh@92 -- # local surp
00:04:39.602 06:29:50 -- setup/hugepages.sh@93 -- # local resv
00:04:39.602 06:29:50 -- setup/hugepages.sh@94 -- # local anon
00:04:39.602 06:29:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:39.602 06:29:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.602 06:29:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.602 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:39.602 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:39.602 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.602 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.602 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.602 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.602 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.602 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.602 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.602 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:39.602 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792912 kB' 'MemAvailable: 9469116 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467228 kB' 'Inactive: 2533216 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533216 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118944 kB' 'Mapped: 50788 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186300 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104492 kB' 'KernelStack: 6656 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 317792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55704 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
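At this point setup.sh has bound the test NVMe controllers to uio_pci_generic and re-allocated hugepages: the snapshot above reports HugePages_Total: 1024, where the pre-test snapshot reported 2048. The check verify_nr_hugepages is building toward can be pictured as follows (a hypothetical standalone version, not the SPDK function itself):

  #!/usr/bin/env bash
  expected=1024   # nr_hugepages requested by get_test_nr_hugepages
  actual=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  if (( actual == expected )); then
      echo "hugepage allocation OK ($actual pages)"
  else
      echo "expected $expected hugepages, found $actual" >&2
      exit 1
  fi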
00:04:39.602 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.602 06:29:50 -- setup/common.sh@32 -- # continue
00:04:39.602 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.602 06:29:50 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/read trace entries for MemFree through HardwareCorrupted elided ...]
00:04:39.603 06:29:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.603 06:29:50 -- setup/common.sh@33 -- # echo 0
00:04:39.603 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:39.603 06:29:50 -- setup/hugepages.sh@97 -- # anon=0
00:04:39.603 06:29:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:39.603 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.603 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:39.603 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:39.603 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.603 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.603 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.603 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.603 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.603 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.603 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792912 kB' 'MemAvailable: 9469124 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467172 kB' 'Inactive: 2533224 kB' 'Active(anon): 127936 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119096 kB' 'Mapped: 51100 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186300 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104492 kB' 'KernelStack: 6540 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 317192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
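The scans that follow look up HugePages_Surp (surplus pages created by pool overcommit) and then HugePages_Rsvd (pages reserved for mappings but not yet faulted in); both should be 0 for this simple accounting to hold. Outside the harness the same values can be read directly; the field names are the real /proc/meminfo keys:

  #!/usr/bin/env bash
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  echo "surplus=$surp reserved=$resv"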
00:04:39.603 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.603 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:39.603 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.603 06:29:50 -- setup/common.sh@32 -- # continue
[... identical compare/continue/read trace entries for MemFree through HugePages_Rsvd elided ...]
00:04:39.604 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.605 06:29:50 -- setup/common.sh@33 -- # echo 0
00:04:39.605 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:39.605 06:29:50 -- setup/hugepages.sh@99 -- # surp=0
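With anon and surp collected (both 0 here) and resv about to be read, verify_nr_hugepages can reconcile the requested page count with the live counters. A hypothetical sketch of that bookkeeping, using the values visible in this run; the exact assertions in setup/hugepages.sh may differ:

  #!/usr/bin/env bash
  nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  surp=0 resv=0                       # the get_meminfo reads in this run
  (( nr - surp == 1024 )) || { echo "unexpected total: $nr" >&2; exit 1; }
  (( free >= resv ))      || { echo "reserved exceeds free" >&2; exit 1; }
  echo "total=$nr free=$free surplus=$surp reserved=$resv"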
00:04:39.605 06:29:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.605 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.605 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:39.605 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:39.605 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.605 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.605 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.605 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.605 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.605 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.605 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.605 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:39.605 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792912 kB' 'MemAvailable: 9469136 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466860 kB' 'Inactive: 2533236 kB' 'Active(anon): 127624 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118736 kB' 'Mapped: 50752 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186292 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104484 kB' 'KernelStack: 6528 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:39.605 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.605 06:29:50 -- setup/common.sh@32 -- # continue
[... identical compare/continue/read trace entries for the remaining /proc/meminfo fields elided; the captured excerpt ends during this scan ...]
setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.606 06:29:50 -- setup/common.sh@33 -- # echo 0 00:04:39.606 06:29:50 -- setup/common.sh@33 -- # return 0 00:04:39.606 06:29:50 -- setup/hugepages.sh@100 -- # resv=0 00:04:39.606 nr_hugepages=1024 00:04:39.606 resv_hugepages=0 00:04:39.606 surplus_hugepages=0 00:04:39.606 anon_hugepages=0 00:04:39.606 06:29:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.606 06:29:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.606 06:29:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.606 06:29:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.606 06:29:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.606 06:29:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.606 06:29:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.606 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.606 06:29:50 -- setup/common.sh@18 -- # local node= 00:04:39.606 06:29:50 -- setup/common.sh@19 -- # local var val 00:04:39.606 06:29:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.606 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.606 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.606 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.606 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.606 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.606 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6793432 kB' 'MemAvailable: 9469656 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466792 kB' 'Inactive: 2533236 kB' 'Active(anon): 127556 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
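For readers following the trace: the scan above is setup/common.sh's get_meminfo helper, which loads /proc/meminfo (or a per-node meminfo file) into an array and walks it field by field until the requested key matches, echoing its value. A condensed sketch reconstructed from the traced commands; an approximation of the helper, not the verbatim script:

    shopt -s extglob                      # the "Node +([0-9])" strip below needs extended globs
    get_meminfo() {
        local get=$1 node=${2:-}          # e.g. get=HugePages_Total, node=0
        local var val _ line mem_f=/proc/meminfo
        local -a mem
        # with a node argument, read the per-node stats from sysfs instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # first matching field wins, e.g. "HugePages_Rsvd: 0" -> echoes 0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Each [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pair in the trace is one iteration of that loop, which is why the log repeats it once per meminfo field.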
00:04:39.606 06:29:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.606 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.606 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:39.606 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:39.606 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.606 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.606 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.606 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.606 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.606 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.606 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.606 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:39.606 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6793432 kB' 'MemAvailable: 9469656 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466792 kB' 'Inactive: 2533236 kB' 'Active(anon): 127556 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118700 kB' 'Mapped: 50752 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186300 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104492 kB' 'KernelStack: 6592 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:39.606 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.606 06:29:50 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read/compare/continue repeats once per field until HugePages_Total ...]
00:04:39.608 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.608 06:29:50 -- setup/common.sh@33 -- # echo 1024
00:04:39.608 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:39.608 06:29:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.608 06:29:50 -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.608 06:29:50 -- setup/hugepages.sh@27 -- # local node
00:04:39.608 06:29:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.608 06:29:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.608 06:29:50 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.608 06:29:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
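The get_nodes step just traced enumerates the NUMA nodes under /sys/devices/system/node and records each node's current hugepage count; no_nodes=1 because this VM exposes a single node. A minimal sketch under that reading; the nodes_sys assignment here uses the helper sketched earlier for illustration, the script itself may obtain the value differently:

    shopt -s extglob
    declare -A nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # e.g. nodes_sys[0]=1024, as the trace shows for node0
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail early if no nodes were found
    }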
00:04:39.608 06:29:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.608 06:29:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.608 06:29:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.608 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.608 06:29:50 -- setup/common.sh@18 -- # local node=0
00:04:39.608 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:39.608 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.608 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.608 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.608 06:29:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.608 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.608 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.608 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:39.608 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:39.608 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6793692 kB' 'MemUsed: 5443400 kB' 'SwapCached: 0 kB' 'Active: 466768 kB' 'Inactive: 2533236 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 2882968 kB' 'Mapped: 50752 kB' 'AnonPages: 118704 kB' 'Shmem: 10496 kB' 'KernelStack: 6608 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81808 kB' 'Slab: 186300 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:39.608 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.608 06:29:50 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read/compare/continue repeats once per node0 meminfo field until HugePages_Surp ...]
00:04:39.609 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.609 06:29:50 -- setup/common.sh@33 -- # echo 0
00:04:39.609 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:39.609 06:29:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.609 06:29:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.609 06:29:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.609 06:29:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.609 06:29:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:39.609 node0=1024 expecting 1024
00:04:39.609 ************************************
00:04:39.609 END TEST default_setup
00:04:39.609 ************************************
00:04:39.609 06:29:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:39.609 
00:04:39.609 real 0m1.160s
00:04:39.609 user 0m0.465s
00:04:39.609 sys 0m0.598s
00:04:39.609 06:29:50 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:39.609 06:29:50 -- common/autotest_common.sh@10 -- # set +x
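The default_setup verification that just passed reduces to two comparisons: the global HugePages_Total must equal nr_hugepages plus surplus and reserved pages, and node0's expected count plus its per-node surplus must match the 1024 the kernel reports. A compact restatement with the values captured above; variable names here are illustrative, using the get_meminfo sketch from earlier:

    nr_hugepages=1024 resv=0 surp=0
    total=$(get_meminfo HugePages_Total)        # 1024 in the snapshot above
    (( total == nr_hugepages + surp + resv )) || exit 1
    node0_surp=$(get_meminfo HugePages_Surp 0)  # 0 for node0 here
    echo "node0=$((1024 + node0_surp)) expecting $nr_hugepages"
    [[ $((1024 + node0_surp)) == 1024 ]]        # the final check mirroring hugepages.sh@130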
00:04:39.609 06:29:50 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:39.609 06:29:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:39.609 06:29:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:39.609 06:29:50 -- common/autotest_common.sh@10 -- # set +x
00:04:39.609 ************************************
00:04:39.609 START TEST per_node_1G_alloc
00:04:39.609 ************************************
00:04:39.609 06:29:50 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:39.609 06:29:50 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:39.609 06:29:50 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:39.609 06:29:50 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:39.609 06:29:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:39.609 06:29:50 -- setup/hugepages.sh@51 -- # shift
00:04:39.609 06:29:50 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:39.609 06:29:50 -- setup/hugepages.sh@52 -- # local node_ids
00:04:39.609 06:29:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.609 06:29:50 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:39.609 06:29:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.609 06:29:50 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:39.609 06:29:50 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.609 06:29:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:39.609 06:29:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:39.609 06:29:50 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.609 06:29:50 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.609 06:29:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.609 06:29:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.609 06:29:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:39.609 06:29:50 -- setup/hugepages.sh@73 -- # return 0
00:04:39.609 06:29:50 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:39.609 06:29:50 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:39.609 06:29:50 -- setup/hugepages.sh@146 -- # setup output
00:04:39.609 06:29:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.609 06:29:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:40.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:40.185 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:40.185 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:40.185 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:40.185 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
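get_test_nr_hugepages, traced above, converts the requested size (1048576 kB, i.e. 1 GiB on node 0) into a page count by dividing by the default hugepage size; with the 2048 kB pages shown in these snapshots that yields the 512 pages assigned to nodes_test[0] and exported as NRHUGE. The arithmetic, restated as a minimal sketch:

    size=1048576                                  # kB requested for node 0 (1 GiB)
    default_hugepages=2048                        # kB per page, the Hugepagesize reported above
    (( size >= default_hugepages )) || exit 1     # same guard as hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0"        # the environment passed to 'setup output'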
00:04:40.185 06:29:50 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:40.185 06:29:50 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:40.185 06:29:50 -- setup/hugepages.sh@89 -- # local node
00:04:40.185 06:29:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:40.185 06:29:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:40.185 06:29:50 -- setup/hugepages.sh@92 -- # local surp
00:04:40.185 06:29:50 -- setup/hugepages.sh@93 -- # local resv
00:04:40.185 06:29:50 -- setup/hugepages.sh@94 -- # local anon
00:04:40.185 06:29:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.185 06:29:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.185 06:29:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.185 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:40.185 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:40.185 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.185 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.185 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.185 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.185 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.185 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.185 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:40.185 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:40.185 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7842640 kB' 'MemAvailable: 10518864 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467752 kB' 'Inactive: 2533236 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119584 kB' 'Mapped: 50924 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186308 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104500 kB' 'KernelStack: 6636 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:40.185 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.185 06:29:50 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read/compare/continue repeats once per field until AnonHugePages ...]
00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.186 06:29:50 -- setup/common.sh@33 -- # echo 0
00:04:40.186 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:40.186 06:29:50 -- setup/hugepages.sh@97 -- # anon=0
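verify_nr_hugepages starts by ruling out transparent-hugepage interference: the hugepages.sh@96 check above tests the kernel's THP mode ("always [madvise] never" on this VM), and AnonHugePages comes back 0 kB, so anon=0; it then re-reads HugePages_Surp, which is the scan that follows. The order of checks, sketched under that reading with the helper from earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB above: THP is not inflating the counts
    fi
    surp=$(get_meminfo HugePages_Surp)      # produces the field scan that follows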
00:04:40.186 06:29:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.186 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.186 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:40.186 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:40.186 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.186 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.186 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.186 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.186 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.186 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:40.186 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7843308 kB' 'MemAvailable: 10519532 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467416 kB' 'Inactive: 2533236 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119220 kB' 'Mapped: 50816 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186296 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104488 kB' 'KernelStack: 6588 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31/@32 read/compare/continue repeats for the remaining fields; the capture ends mid-scan ...]
continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.186 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.186 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.187 06:29:50 -- setup/common.sh@33 -- # echo 0 00:04:40.187 06:29:50 -- setup/common.sh@33 -- # return 0 00:04:40.187 06:29:50 -- setup/hugepages.sh@99 -- # surp=0 00:04:40.187 06:29:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.187 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.187 06:29:50 -- setup/common.sh@18 -- # local node= 00:04:40.187 06:29:50 -- setup/common.sh@19 -- # local var val 00:04:40.187 06:29:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.187 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.187 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.187 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.187 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.187 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.187 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7843828 kB' 'MemAvailable: 10520052 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467036 kB' 'Inactive: 2533236 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118840 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186308 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104500 kB' 'KernelStack: 6592 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.187 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.187 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.188 06:29:50 -- setup/common.sh@33 -- # echo 0 00:04:40.188 06:29:50 -- setup/common.sh@33 -- # return 0 00:04:40.188 06:29:50 -- setup/hugepages.sh@100 -- # resv=0 00:04:40.188 06:29:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:40.188 nr_hugepages=512 00:04:40.188 06:29:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.188 resv_hugepages=0 00:04:40.188 06:29:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.188 surplus_hugepages=0 00:04:40.188 anon_hugepages=0 00:04:40.188 06:29:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.188 06:29:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:40.188 06:29:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:40.188 06:29:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.188 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.188 06:29:50 -- setup/common.sh@18 -- # local node= 00:04:40.188 06:29:50 -- setup/common.sh@19 -- # local var val 00:04:40.188 06:29:50 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.188 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.188 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.188 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.188 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.188 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.188 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7843932 kB' 'MemAvailable: 10520156 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466704 kB' 'Inactive: 2533236 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186304 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104496 kB' 'KernelStack: 6592 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.188 06:29:50 -- setup/common.sh@32 -- # continue 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _ 
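Every get_meminfo call traced above follows the same lookup pattern: snapshot the meminfo file, split each line on ': ', and echo the value of the first key that matches the request. Below is a minimal, self-contained bash sketch of that pattern, reconstructed from what the xtrace shows; it is a reading aid, not the verbatim setup/common.sh implementation, and the argument handling is an assumed simplification.

# Hedged sketch of the lookup the trace logs as get_meminfo KEY [NODE].
get_meminfo() {
    local get=$1 node=$2
    local line key val _
    local mem_f=/proc/meminfo
    # A node id switches the source file, as common.sh@23/@24 do above.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }        # per-node lines carry a "Node <id> " prefix
        IFS=': ' read -r key val _ <<< "$line"
        if [[ $key == "$get" ]]; then     # every other key hits the continue branch
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Surp      # -> 0 on the box traced above
get_meminfo HugePages_Free 0    # node 0's count, e.g. 512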
00:04:40.188 06:29:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.188 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.188 06:29:50 -- setup/common.sh@18 -- # local node=
00:04:40.188 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:40.188 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.188 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.188 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.188 06:29:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.188 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.188 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.188 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:40.188 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:40.188 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7843932 kB' 'MemAvailable: 10520156 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466704 kB' 'Inactive: 2533236 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186304 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104496 kB' 'KernelStack: 6592 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... xtrace elided: every key before HugePages_Total hits the continue branch ...]
00:04:40.190 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.190 06:29:50 -- setup/common.sh@33 -- # echo 512
00:04:40.190 06:29:50 -- setup/common.sh@33 -- # return 0
00:04:40.190 06:29:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:40.190 06:29:50 -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.190 06:29:50 -- setup/hugepages.sh@27 -- # local node
00:04:40.190 06:29:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.190 06:29:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:40.190 06:29:50 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:40.190 06:29:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
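The get_nodes step just logged derives the NUMA node list from sysfs. A hedged sketch of that discovery step follows; the array name mirrors the trace, the per-node count of 512 is taken from this run, and the extglob/nullglob settings are assumptions needed to make the glob work standalone.

# Hedged sketch of get_nodes: glob the node directories, key an array
# by node id, and count how many nodes were found.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512     # e.g. nodes_sys[0]=512 (512 x 2 MiB = 1 GiB)
done
no_nodes=${#nodes_sys[@]}             # 1 on the single-node VM traced here
(( no_nodes > 0 )) || exit 1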
00:04:40.190 06:29:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.190 06:29:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:40.190 06:29:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:40.190 06:29:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.190 06:29:50 -- setup/common.sh@18 -- # local node=0
00:04:40.190 06:29:50 -- setup/common.sh@19 -- # local var val
00:04:40.190 06:29:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.190 06:29:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.190 06:29:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:40.190 06:29:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:40.190 06:29:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.190 06:29:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.190 06:29:50 -- setup/common.sh@31 -- # IFS=': '
00:04:40.190 06:29:50 -- setup/common.sh@31 -- # read -r var val _
00:04:40.190 06:29:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7843932 kB' 'MemUsed: 4393160 kB' 'SwapCached: 0 kB' 'Active: 466912 kB' 'Inactive: 2533236 kB' 'Active(anon): 127676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 2882968 kB' 'Mapped: 50700 kB' 'AnonPages: 118752 kB' 'Shmem: 10496 kB' 'KernelStack: 6576 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81808 kB' 'Slab: 186304 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: every key before HugePages_Surp in node0's meminfo hits the continue branch ...]
00:04:40.190 06:29:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.190 06:29:50 -- setup/common.sh@33 -- # echo 0
00:04:40.190 06:29:50 -- setup/common.sh@33 -- # return 0
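What follows in the log is the per-node bookkeeping: reserved and surplus pages are folded into each node's expected count before it is compared with what the kernel reports. A sketch under this run's values; get_meminfo and nodes_sys are the helpers sketched earlier, and the starting count of 512 is taken from the trace.

# Hedged sketch of the per-node expectation check (hugepages.sh@115-128).
resv=0
nodes_test=([0]=512)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 in the trace above
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done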
00:04:40.190 06:29:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.190 06:29:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.190 06:29:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.191 06:29:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:40.191 06:29:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:40.191 node0=512 expecting 512
00:04:40.191 06:29:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:40.191
00:04:40.191 real 0m0.605s
00:04:40.191 user 0m0.253s
00:04:40.191 sys 0m0.363s
00:04:40.191 06:29:50 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:40.191 06:29:50 -- common/autotest_common.sh@10 -- # set +x
00:04:40.191 ************************************
00:04:40.191 END TEST per_node_1G_alloc
00:04:40.191 ************************************
00:04:40.451 06:29:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:40.451 06:29:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:40.451 06:29:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:40.451 06:29:50 -- common/autotest_common.sh@10 -- # set +x
00:04:40.451 ************************************
00:04:40.451 START TEST even_2G_alloc
00:04:40.451 ************************************
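even_2G_alloc starts from the sizing arithmetic traced next: get_test_nr_hugepages converts a kB request into a page count using the default hugepage size. A sketch with this run's numbers; the variable names mirror the trace, and deriving default_hugepages from Hugepagesize in /proc/meminfo is an assumption stated here rather than shown in the log.

# Hedged sketch of the sizing step (hugepages.sh@49-57): a 2 GiB request
# (2097152 kB) over 2 MiB pages (2048 kB) yields nr_hugepages=1024.
size=2097152              # requested allocation, in kB
default_hugepages=2048    # assumed: Hugepagesize from /proc/meminfo, in kB
(( size >= default_hugepages )) || size=$default_hugepages
nr_hugepages=$(( size / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1024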
_nr_hugepages=1024 00:04:40.451 06:29:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.451 06:29:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.451 06:29:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.451 06:29:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.451 06:29:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.451 06:29:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.451 06:29:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:40.451 06:29:50 -- setup/hugepages.sh@83 -- # : 0 00:04:40.451 06:29:50 -- setup/hugepages.sh@84 -- # : 0 00:04:40.451 06:29:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.451 06:29:50 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:40.451 06:29:50 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:40.451 06:29:50 -- setup/hugepages.sh@153 -- # setup output 00:04:40.451 06:29:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.451 06:29:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.711 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.711 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.711 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.711 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.976 06:29:51 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:40.976 06:29:51 -- setup/hugepages.sh@89 -- # local node 00:04:40.976 06:29:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.976 06:29:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.976 06:29:51 -- setup/hugepages.sh@92 -- # local surp 00:04:40.976 06:29:51 -- setup/hugepages.sh@93 -- # local resv 00:04:40.976 06:29:51 -- setup/hugepages.sh@94 -- # local anon 00:04:40.976 06:29:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.976 06:29:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.976 06:29:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.976 06:29:51 -- setup/common.sh@18 -- # local node= 00:04:40.976 06:29:51 -- setup/common.sh@19 -- # local var val 00:04:40.976 06:29:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.976 06:29:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.976 06:29:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.976 06:29:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.976 06:29:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.976 06:29:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792484 kB' 'MemAvailable: 9468708 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467216 kB' 'Inactive: 2533236 kB' 'Active(anon): 127980 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119096 kB' 'Mapped: 50824 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186128 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104320 kB' 
'KernelStack: 6584 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
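This pass is scanning for AnonHugePages to fill verify_nr_hugepages' anon counter; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 just before it gates the read on the transparent-hugepage state. A sketch of that gate, assuming the standard sysfs knob and the get_meminfo sketch above:

    # THP state reads like "always [madvise] never"; the bracketed word is active.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *'[never]'* ]]; then
      # THP can hand out anonymous hugepages behind the test's back, so count
      # them; on this run the counter comes back 0 kB anyway (anon=0).
      anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon=$anon"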
00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.976 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.976 06:29:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.977 06:29:51 -- setup/common.sh@33 -- # echo 0 00:04:40.977 06:29:51 -- setup/common.sh@33 -- # return 0 00:04:40.977 06:29:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:40.977 06:29:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.977 06:29:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.977 06:29:51 -- setup/common.sh@18 -- # local node= 00:04:40.977 06:29:51 -- setup/common.sh@19 -- # local var val 00:04:40.977 06:29:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.977 06:29:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.977 06:29:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.977 06:29:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.977 06:29:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.977 06:29:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792484 kB' 'MemAvailable: 9468708 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467000 kB' 'Inactive: 2533236 kB' 'Active(anon): 127764 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118852 kB' 'Mapped: 50824 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186164 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104356 kB' 'KernelStack: 6624 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 
'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.977 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.977 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- 
setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 
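This HugePages_Surp scan, together with the HugePages_Rsvd and HugePages_Total scans that follow, feeds the bookkeeping at hugepages.sh@107-110: the kernel's total page count must equal the requested pool plus surplus plus reserved pages. A sketch of that arithmetic with this run's values, reusing the get_meminfo sketch above; nr_hugepages comes from the NRHUGE=1024 setup earlier:

    nr_hugepages=1024                      # requested via NRHUGE=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this log
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this log
    total=$(get_meminfo HugePages_Total)   # 1024 in this log
    # The pool is consistent only if every page is accounted for:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"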
00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.978 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.978 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.979 06:29:51 -- setup/common.sh@33 -- # echo 0 00:04:40.979 06:29:51 -- setup/common.sh@33 -- # return 0 00:04:40.979 06:29:51 -- setup/hugepages.sh@99 -- # surp=0 00:04:40.979 06:29:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.979 06:29:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.979 06:29:51 -- setup/common.sh@18 -- # local node= 00:04:40.979 06:29:51 -- setup/common.sh@19 -- # local var val 00:04:40.979 06:29:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.979 06:29:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.979 06:29:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.979 06:29:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.979 06:29:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.979 06:29:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792484 kB' 'MemAvailable: 9468708 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 2533236 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118784 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186168 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104360 kB' 'KernelStack: 6608 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.979 06:29:51 -- setup/common.sh@32 -- # continue 
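Every scan so far ran with node= empty, so the common.sh@23 test against /sys/devices/system/node/node/meminfo failed and the reads came from /proc/meminfo. The per-node call near the end of this test (get_meminfo HugePages_Surp 0) flips mem_f to the node0 sysfs file, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of just that file selection:

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      # lines read "Node 0 HugePages_Surp: 0" instead of "HugePages_Surp: 0"
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    grep -E 'HugePages_(Total|Free|Surp)' "$mem_f"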
00:04:40.979 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.979 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.980 06:29:51 -- setup/common.sh@33 -- # echo 0 00:04:40.980 06:29:51 -- setup/common.sh@33 -- # return 0 00:04:40.980 06:29:51 -- setup/hugepages.sh@100 -- # resv=0 00:04:40.980 06:29:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.980 nr_hugepages=1024 00:04:40.980 06:29:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.980 resv_hugepages=0 00:04:40.980 06:29:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.980 surplus_hugepages=0 00:04:40.980 06:29:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.980 anon_hugepages=0 00:04:40.980 06:29:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.980 06:29:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.980 06:29:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.980 06:29:51 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:40.980 06:29:51 -- setup/common.sh@18 -- # local node= 00:04:40.980 06:29:51 -- setup/common.sh@19 -- # local var val 00:04:40.980 06:29:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.980 06:29:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.980 06:29:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.980 06:29:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.980 06:29:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.980 06:29:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792232 kB' 'MemAvailable: 9468456 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466892 kB' 'Inactive: 2533236 kB' 'Active(anon): 127656 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118784 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186164 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104356 kB' 'KernelStack: 6608 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- 
setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.980 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.980 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _ 
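Once HugePages_Total comes back as 1024, get_nodes (hugepages.sh@27-33, traced just after this scan) globs sysfs for the online NUMA nodes and records 1024 pages as the expectation for each; with the single node on this VM that leaves nodes_sys[0]=1024 and no_nodes=1. A sketch of that enumeration, assuming a Linux box with at least one node directory; extglob is required for the +([0-9]) glob:

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=1024   # expect the full pool on every node
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }
    echo "no_nodes=$no_nodes"

The per-node loop that follows then compares each nodes_test entry (plus resv) against this table, which is how the earlier test printed "node0=512 expecting 512".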
00:04:40.981 06:29:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.981 06:29:51 -- setup/common.sh@32 -- # continue
00:04:40.981 06:29:51 -- setup/common.sh@31 -- # IFS=': '
00:04:40.981 06:29:51 -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / continue pairs for Writeback through Unaccepted elided ...]
00:04:40.982 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.982 06:29:51 -- setup/common.sh@33 -- # echo 1024
00:04:40.982 06:29:51 -- setup/common.sh@33 -- # return 0
00:04:40.982 06:29:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.982 06:29:51 -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.982 06:29:51 -- setup/hugepages.sh@27 -- # local node
00:04:40.982 06:29:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.982 06:29:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:40.982 06:29:51 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:40.982 06:29:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:40.982 06:29:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.982 06:29:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:40.982 06:29:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:40.982 06:29:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.982 06:29:51 -- setup/common.sh@18 -- # local node=0
00:04:40.982 06:29:51 -- setup/common.sh@19 -- # local var val
00:04:40.982 06:29:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.982 06:29:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.982 06:29:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:40.982 06:29:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:40.982 06:29:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.982 06:29:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.982 06:29:51 -- setup/common.sh@31 -- # IFS=': '
00:04:40.982 06:29:51 -- setup/common.sh@31 -- # read -r var val _
00:04:40.982 06:29:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6792232 kB' 'MemUsed: 5444860 kB' 'SwapCached: 0 kB' 'Active: 466896 kB' 'Inactive: 2533236 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 2882968 kB' 'Mapped: 50700 kB' 'AnonPages: 118748 kB' 'Shmem: 10496 kB' 'KernelStack: 6592 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81808 kB' 'Slab: 186164 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
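The get_meminfo trace above repeats one simple pattern for every lookup in this run: choose /proc/meminfo (or the per-node file when a node is given), strip the "Node <n> " prefix that per-node files carry, then split each line on IFS=': ' until the requested key matches and echo its value. A minimal sketch of that pattern, assuming plain bash with extglob; names are illustrative, not the exact SPDK helpers:

  shopt -s extglob                      # for the +([0-9]) pattern below

  # Sketch: look up one meminfo key; $2 (optional) restricts the query to a NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local mem line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node <n> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1                          # key not present
  }

  get_meminfo_sketch HugePages_Total    # -> 1024 at this point in the run
  get_meminfo_sketch HugePages_Surp 0   # same key, node0 only

The linear scan is why the trace shows one [[ key == ... ]] / continue pair per meminfo line: with roughly fifty keys per snapshot, every lookup walks most of the file.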
00:04:40.982 06:29:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.982 06:29:51 -- setup/common.sh@32 -- # continue
[... identical check/continue pairs for the remaining node0 keys, MemFree through HugePages_Free, elided ...]
00:04:40.983 06:29:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.983 06:29:51 -- setup/common.sh@33 -- # echo 0
00:04:40.983 06:29:51 -- setup/common.sh@33 -- # return 0
00:04:40.983 06:29:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.983 node0=1024 expecting 1024
00:04:40.983 06:29:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.983 06:29:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:40.983 06:29:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:40.983 06:29:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:40.983 
00:04:40.983 real	0m0.621s
00:04:40.983 user	0m0.268s
00:04:40.983 sys	0m0.355s
00:04:40.983 06:29:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:40.983 06:29:51 -- common/autotest_common.sh@10 -- # set +x
00:04:40.983 ************************************
00:04:40.983 END TEST even_2G_alloc
00:04:40.983 ************************************
00:04:40.983 06:29:51 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:40.983 06:29:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:40.983 06:29:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:40.983 06:29:51 -- common/autotest_common.sh@10 -- # set +x
00:04:40.983 ************************************
00:04:40.983 START TEST odd_alloc
00:04:40.983 ************************************
00:04:40.983 06:29:51 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:40.983 06:29:51 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:40.983 06:29:51 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:40.983 06:29:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:40.983 06:29:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:40.983 06:29:51 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:40.983 06:29:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:40.983 06:29:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:40.983 06:29:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:40.983 06:29:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:40.983 06:29:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:40.983 06:29:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:40.983 06:29:51 -- setup/hugepages.sh@83 -- # : 0
00:04:40.983 06:29:51 -- setup/hugepages.sh@84 -- # : 0
00:04:40.983 06:29:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:40.983 06:29:51 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:40.983 06:29:51 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:40.983 06:29:51 -- setup/hugepages.sh@160 -- # setup output
00:04:40.983 06:29:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.983 06:29:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:41.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:41.558 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.558 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.558 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.558 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
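A few lines up, get_test_nr_hugepages converts the requested budget into a page count: 2098176 kB against the 2048 kB default huge page size gives 1024.5 pages, and the trace lands on nr_hugepages=1025, consistent with round-up division (the exact rounding expression in hugepages.sh is not visible in this window, so treat this as a sketch):

  default_hugepages=2048   # kB, Hugepagesize from /proc/meminfo
  size=2098176             # kB, requested by odd_alloc
  nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
  echo "$nr_hugepages"     # -> 1025, deliberately odd for this test

With a single node, the whole count goes to nodes_test[0]=1025, and HUGEMEM=2049 (MB, i.e. the same 2098176 kB) is handed to setup.sh.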
00:04:41.558 06:29:52 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:41.558 06:29:52 -- setup/hugepages.sh@89 -- # local node
00:04:41.558 06:29:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.558 06:29:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.558 06:29:52 -- setup/hugepages.sh@92 -- # local surp
00:04:41.558 06:29:52 -- setup/hugepages.sh@93 -- # local resv
00:04:41.558 06:29:52 -- setup/hugepages.sh@94 -- # local anon
00:04:41.558 06:29:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.558 06:29:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.558 06:29:52 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.558 06:29:52 -- setup/common.sh@18 -- # local node=
00:04:41.558 06:29:52 -- setup/common.sh@19 -- # local var val
00:04:41.558 06:29:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.558 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.558 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.558 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.558 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.559 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.559 06:29:52 -- setup/common.sh@31 -- # IFS=': '
00:04:41.559 06:29:52 -- setup/common.sh@31 -- # read -r var val _
00:04:41.559 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6783312 kB' 'MemAvailable: 9459536 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467696 kB' 'Inactive: 2533236 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 50876 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186148 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104340 kB' 'KernelStack: 6716 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
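Note the path probed above: called without a node argument, $node expands empty, so the existence test checks /sys/devices/system/node/node/meminfo, which never exists, and the lookup falls back to the system-wide /proc/meminfo; the snapshot it reads already reports HugePages_Total: 1025 from the setup step. The two sources differ only in a per-line prefix (illustrative commands, not from the log):

  head -n 2 /proc/meminfo                           # MemTotal: ..., MemFree: ...
  head -n 2 /sys/devices/system/node/node0/meminfo  # Node 0 MemTotal: ..., Node 0 MemFree: ...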
00:04:41.559 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.559 06:29:52 -- setup/common.sh@32 -- # continue
[... identical check/continue pairs for MemFree through HardwareCorrupted elided ...]
00:04:41.560 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.560 06:29:52 -- setup/common.sh@33 -- # echo 0
00:04:41.560 06:29:52 -- setup/common.sh@33 -- # return 0
00:04:41.560 06:29:52 -- setup/hugepages.sh@97 -- # anon=0
00:04:41.560 06:29:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.560 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.560 06:29:52 -- setup/common.sh@18 -- # local node=
00:04:41.560 06:29:52 -- setup/common.sh@19 -- # local var val
00:04:41.560 06:29:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.560 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.560 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.560 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.560 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.560 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.560 06:29:52 -- setup/common.sh@31 -- # IFS=': '
00:04:41.560 06:29:52 -- setup/common.sh@31 -- # read -r var val _
00:04:41.560 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6783572 kB' 'MemAvailable: 9459796 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467444 kB' 'Inactive: 2533236 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119300 kB' 'Mapped: 50820 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186128 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104320 kB' 'KernelStack: 6620 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:41.560 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.560 06:29:52 -- setup/common.sh@32 -- # continue
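verify_nr_hugepages is gathering the correction terms it needs before comparing totals: anon (AnonHugePages, only relevant while transparent_hugepage is not set to [never]), surp (HugePages_Surp), and, next in the log, resv (HugePages_Rsvd). The balance check seen earlier at hugepages.sh@110 then has roughly this shape, reusing the lookup helper sketched above (names are illustrative, not the script's exact code):

  nr_hugepages=1025                              # from get_test_nr_hugepages
  anon=$(get_meminfo_sketch AnonHugePages)       # 0 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)      # queried next in the log
  total=$(get_meminfo_sketch HugePages_Total)    # 1025 after setup
  (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage count' >&2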
[... identical check/continue pairs for MemFree through HugePages_Rsvd elided ...]
00:04:41.561 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.561 06:29:52 -- setup/common.sh@33 -- # echo 0
00:04:41.561 06:29:52 -- setup/common.sh@33 -- # return 0
00:04:41.561 06:29:52 -- setup/hugepages.sh@99 -- # surp=0
00:04:41.561 06:29:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.561 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.561 06:29:52 -- setup/common.sh@18 -- # local node=
00:04:41.561 06:29:52 -- setup/common.sh@19 -- # local var val
00:04:41.561 06:29:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.561 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.561 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.561 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.561 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.561 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
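For reference, the kernel counters being fetched one key at a time here have these meanings: HugePages_Total is the pool size, HugePages_Free the pages not yet handed out, HugePages_Rsvd pages promised to mappings but not yet faulted in, and HugePages_Surp surplus pages above nr_hugepages granted via overcommit. Outside the test harness they can be read in one pass (illustrative one-liner, not how setup/common.sh does it):

  grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo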
00:04:41.561 06:29:52 -- setup/common.sh@31 -- # IFS=': '
00:04:41.561 06:29:52 -- setup/common.sh@31 -- # read -r var val _
00:04:41.561 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6783320 kB' 'MemAvailable: 9459544 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 467120 kB' 'Inactive: 2533236 kB' 'Active(anon): 127884 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119032 kB' 'Mapped: 50960 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186148 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104340 kB' 'KernelStack: 6640 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 317604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55688 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:41.562 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.562 06:29:52 -- setup/common.sh@32 -- # continue
[... identical check/continue pairs continue for MemFree through CmaTotal ...]
IFS=': ' 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.562 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.562 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.563 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:41.563 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:41.563 06:29:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:41.563 06:29:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.563 nr_hugepages=1025 00:04:41.563 resv_hugepages=0 00:04:41.563 06:29:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.563 06:29:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.563 surplus_hugepages=0 00:04:41.563 anon_hugepages=0 00:04:41.563 06:29:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.563 06:29:52 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.563 06:29:52 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.563 06:29:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.563 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.563 06:29:52 -- setup/common.sh@18 -- # local node= 00:04:41.563 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:41.563 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.563 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.563 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.563 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.563 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.563 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6783320 kB' 'MemAvailable: 9459544 kB' 'Buffers: 3704 kB' 'Cached: 2879264 kB' 'SwapCached: 0 kB' 'Active: 466864 kB' 'Inactive: 2533236 kB' 'Active(anon): 127628 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118520 kB' 'Mapped: 50760 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186156 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104348 kB' 'KernelStack: 6656 kB' 'PageTables: 4168 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457548 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.563 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.563 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.564 06:29:52 -- setup/common.sh@33 -- # echo 1025 00:04:41.564 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:41.564 06:29:52 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.564 06:29:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.564 06:29:52 -- setup/hugepages.sh@27 -- # local node 00:04:41.564 06:29:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.564 06:29:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:41.564 06:29:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.564 06:29:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.564 06:29:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.564 06:29:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.564 06:29:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.564 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.564 06:29:52 -- setup/common.sh@18 -- # local node=0 00:04:41.564 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:41.564 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.564 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.564 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.564 06:29:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.564 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.564 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6783320 kB' 'MemUsed: 5453772 kB' 'SwapCached: 0 kB' 'Active: 466904 kB' 'Inactive: 2533240 kB' 'Active(anon): 127668 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 2882972 kB' 'Mapped: 50700 kB' 'AnonPages: 118880 kB' 'Shmem: 10496 kB' 'KernelStack: 6608 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81808 kB' 'Slab: 186140 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.564 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.564 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # 
continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # continue 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.565 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.565 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.565 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:41.565 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:41.565 06:29:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.565 06:29:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.565 06:29:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.565 node0=1025 expecting 1025 00:04:41.565 ************************************ 00:04:41.565 END TEST odd_alloc 00:04:41.565 ************************************ 00:04:41.565 06:29:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.565 06:29:52 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:41.565 06:29:52 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:41.565 00:04:41.565 real 0m0.591s 00:04:41.565 user 0m0.241s 00:04:41.565 sys 0m0.363s 00:04:41.565 06:29:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.565 06:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:41.827 06:29:52 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.827 06:29:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.827 06:29:52 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.827 06:29:52 -- common/autotest_common.sh@10 -- # set +x 00:04:41.827 ************************************ 00:04:41.827 START TEST custom_alloc 00:04:41.827 ************************************ 00:04:41.827 06:29:52 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:41.827 06:29:52 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.827 06:29:52 -- setup/hugepages.sh@169 -- # local node 00:04:41.827 06:29:52 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.827 06:29:52 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.827 06:29:52 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.827 06:29:52 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.827 06:29:52 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.827 06:29:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.827 06:29:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.827 06:29:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.827 06:29:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.827 06:29:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.827 06:29:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.827 06:29:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@83 -- # : 0 00:04:41.827 06:29:52 -- setup/hugepages.sh@84 -- # : 0 00:04:41.827 06:29:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.827 06:29:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.827 06:29:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.827 06:29:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.827 06:29:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.827 06:29:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.827 06:29:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.827 06:29:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.827 06:29:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.827 06:29:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.827 06:29:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.827 06:29:52 -- setup/hugepages.sh@78 -- # return 0 00:04:41.827 06:29:52 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:41.827 06:29:52 -- setup/hugepages.sh@187 -- # setup output 00:04:41.827 06:29:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.827 06:29:52 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.088 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.088 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.088 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.088 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.352 06:29:52 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:42.352 06:29:52 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:42.352 06:29:52 -- setup/hugepages.sh@89 -- # local node 00:04:42.352 06:29:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.352 06:29:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.352 06:29:52 -- setup/hugepages.sh@92 -- # local surp 00:04:42.352 06:29:52 -- setup/hugepages.sh@93 -- # local resv 00:04:42.352 06:29:52 -- setup/hugepages.sh@94 -- # local anon 00:04:42.352 06:29:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.352 06:29:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.352 06:29:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.352 06:29:52 -- setup/common.sh@18 -- # local node= 00:04:42.352 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:42.352 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.352 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.352 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.352 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.352 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.352 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7834676 kB' 'MemAvailable: 10510904 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 467436 kB' 'Inactive: 2533240 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 32 kB' 'AnonPages: 119316 kB' 'Mapped: 50784 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186144 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104336 kB' 'KernelStack: 6620 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 
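The wall of "[[ Key == \H\u\g\e\P\a\g\e\s... ]] / continue" pairs filling this log is the repo's get_meminfo helper scanning a meminfo file key by key: it slurps the file, strips any "Node N " prefix, splits each line on IFS=': ', and skips every field until the requested one matches. A minimal runnable sketch of that pattern, condensed from the setup/common.sh trace (@17-33) above — the real helper drives the loop through a printf pipeline, but the logic is the same:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node queries read sysfs instead of /proc (common.sh@23-24).
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                 # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")          # common.sh@29
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # common.sh@31
        [[ $var == "$get" ]] || continue         # the skip runs traced here
        echo "$val"                              # common.sh@33
        return 0
    done
    return 1
}
get_meminfo HugePages_Total      # 512 for this custom_alloc run
get_meminfo HugePages_Surp 0     # node0 query via sysfs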
00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.352 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.352 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': 
' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 
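With the scans in hand, verify_nr_hugepages (setup/hugepages.sh@89-130 in the trace) folds the values into pool accounting: anon, surplus, and reserved pages come from get_meminfo, the pool total has to equal nr_hugepages + surp + resv, and every NUMA node is then checked against its expected share, which is where lines like "node0=512 expecting 512" come from. A condensed sketch of that bookkeeping, assuming the get_meminfo sketch above; hugepages.sh line numbers from the trace are noted in the comments:

#!/usr/bin/env bash
nr_hugepages=512   # requested by custom_alloc (odd_alloc used 1025)
verify_nr_hugepages() {
    local anon surp resv total node
    anon=$(get_meminfo AnonHugePages)     # 0 here, no THP in play (@97)
    surp=$(get_meminfo HugePages_Surp)    # surplus pages (@99)
    resv=$(get_meminfo HugePages_Rsvd)    # reserved pages (@100)
    total=$(get_meminfo HugePages_Total)  # pool size (@110)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Every configured page must be accounted for (@107):
    (( total == nr_hugepages + surp + resv )) || return 1
    # Per-node check behind "node0=512 expecting 512" (@115-130):
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
    done
}
verify_nr_hugepages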
00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.353 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:42.353 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:42.353 06:29:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:42.353 06:29:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.353 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.353 06:29:52 -- setup/common.sh@18 -- # local node= 00:04:42.353 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:42.353 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.353 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.353 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.353 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.353 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.353 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7841016 kB' 'MemAvailable: 10517244 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 467092 kB' 'Inactive: 2533240 kB' 'Active(anon): 127856 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 32 kB' 'AnonPages: 118896 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186208 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104400 kB' 'KernelStack: 6576 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.353 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.353 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 
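The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" above are xtrace output from the get_meminfo helper in setup/common.sh: it snapshots the meminfo file into an array, then walks the entries with IFS=': ' until the requested key matches, echoes that key's value, and returns. A condensed sketch, reconstructed from the @16-@33 trace tags above (the shape follows the trace; the exact source layout may differ):

    shopt -s extglob                      # needed for the +([0-9]) strip below
    get_meminfo() {                       # usage: get_meminfo <key> [node]
        local get=$1 node=$2
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # a node argument redirects the lookup to that node's meminfo file
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix of per-node lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each miss is one "continue" entry above
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

The backslash-escaped pattern in the trace is only how xtrace prints the right-hand side of the match; in this scan it is HugePages_Surp, and the hit ends with echo 0, which hugepages.sh@99 captures as surp=0 just below.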
00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.354 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.354 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.355 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:42.355 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:42.355 06:29:52 -- setup/hugepages.sh@99 -- # surp=0 00:04:42.355 06:29:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.355 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.355 06:29:52 -- setup/common.sh@18 -- # local node= 00:04:42.355 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:42.355 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.355 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.355 06:29:52 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:42.355 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.355 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.355 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7841016 kB' 'MemAvailable: 10517244 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 467096 kB' 'Inactive: 2533240 kB' 'Active(anon): 127860 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118940 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186200 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104392 kB' 'KernelStack: 6592 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.355 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.355 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 
-- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- 
setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.356 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:42.356 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:42.356 06:29:52 -- setup/hugepages.sh@100 -- # resv=0 00:04:42.356 nr_hugepages=512 00:04:42.356 06:29:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:42.356 06:29:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.356 resv_hugepages=0 00:04:42.356 surplus_hugepages=0 00:04:42.356 06:29:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.356 06:29:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.356 anon_hugepages=0 00:04:42.356 06:29:52 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.356 06:29:52 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:42.356 06:29:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.356 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.356 06:29:52 -- setup/common.sh@18 -- # local node= 00:04:42.356 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:42.356 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.356 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.356 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.356 06:29:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.356 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.356 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7841016 kB' 'MemAvailable: 10517244 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 467112 kB' 'Inactive: 2533240 kB' 'Active(anon): 127876 kB' 
'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118952 kB' 'Mapped: 50700 kB' 'Shmem: 10496 kB' 'KReclaimable: 81808 kB' 'Slab: 186200 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104392 kB' 'KernelStack: 6592 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982860 kB' 'Committed_AS: 315112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55656 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.356 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.356 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- 
setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 
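The scan in progress here is fetching HugePages_Total for the re-check at hugepages.sh@110; together with the @107 and @109 checks traced just before it, the test asserts that the 512 configured pages are fully accounted for. A minimal restatement with the values read back in this run:

    nr_hugepages=512   # echoed at hugepages.sh@102
    surp=0             # HugePages_Surp, captured at hugepages.sh@99
    resv=0             # HugePages_Rsvd, captured at hugepages.sh@100
    total=512          # HugePages_Total, returned by the scan that ends below
    (( total == nr_hugepages + surp + resv ))   # hugepages.sh@110: fully accounted for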
00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.357 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.357 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- 
setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.358 06:29:52 -- setup/common.sh@33 -- # echo 512 00:04:42.358 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:42.358 06:29:52 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.358 06:29:52 -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.358 06:29:52 -- setup/hugepages.sh@27 -- # local node 00:04:42.358 06:29:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.358 06:29:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.358 06:29:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.358 06:29:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.358 06:29:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.358 06:29:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.358 06:29:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.358 06:29:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.358 06:29:52 -- setup/common.sh@18 -- # local node=0 00:04:42.358 06:29:52 -- setup/common.sh@19 -- # local var val 00:04:42.358 06:29:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.358 06:29:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.358 06:29:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.358 06:29:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.358 06:29:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.358 06:29:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 7840260 kB' 'MemUsed: 4396832 kB' 'SwapCached: 0 kB' 'Active: 467136 kB' 'Inactive: 2533240 kB' 'Active(anon): 127900 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2882972 kB' 'Mapped: 51740 kB' 'AnonPages: 119012 kB' 'Shmem: 10496 kB' 'KernelStack: 6624 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81808 kB' 'Slab: 186200 kB' 'SReclaimable: 81808 kB' 'SUnreclaim: 104392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 
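This get_meminfo call differs from the earlier ones: hugepages.sh@117 passed a node id (get_meminfo HugePages_Surp 0), so the existence test at common.sh@23 now succeeds and @24 points mem_f at /sys/devices/system/node/node0/meminfo. Per-node lines carry a "Node 0" prefix, stripped at @29, and a slightly different key set (MemUsed and FilePages appear in the snapshot). For reference only, not part of the test scripts, the same field can be read directly like this:

    # hypothetical one-liner: hugepage surplus on node 0 from the per-node file
    node=0
    sed -n "s/^Node ${node} HugePages_Surp:[[:space:]]*//p" \
        /sys/devices/system/node/node${node}/meminfo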
00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.358 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.358 06:29:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # continue 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.359 06:29:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.359 06:29:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.359 06:29:52 -- setup/common.sh@33 -- # echo 0 00:04:42.359 06:29:52 -- setup/common.sh@33 -- # return 0 00:04:42.359 node0=512 expecting 512 00:04:42.359 06:29:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.359 06:29:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.359 06:29:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.359 06:29:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.359 06:29:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:42.359 06:29:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:42.359 00:04:42.359 real 0m0.654s 00:04:42.359 user 0m0.276s 00:04:42.359 sys 0m0.380s 00:04:42.359 06:29:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.359 ************************************ 
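With the per-node surplus back as 0, the verifier folds reserved and surplus pages into nodes_test and prints the closing check, node0=512 expecting 512; the [[ 512 == \5\1\2 ]] entry is just xtrace's character-escaped rendering of matching the observed count against the literal 512. Roughly, the closing loop at hugepages.sh@126-@130 amounts to the following (array contents taken from this run; the real loop also builds the sorted_t and sorted_s helper arrays):

    nodes_test=([0]=512)   # observed pages per node after adding resv and surp
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting 512"
        [[ ${nodes_test[node]} == 512 ]]   # a mismatch here fails custom_alloc
    done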
00:04:42.359 06:29:52 -- common/autotest_common.sh@10 -- # set +x
00:04:42.359 END TEST custom_alloc
00:04:42.359 ************************************
00:04:42.359 06:29:53 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:42.359 06:29:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:42.359 06:29:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:42.359 06:29:53 -- common/autotest_common.sh@10 -- # set +x
00:04:42.359 ************************************
00:04:42.359 START TEST no_shrink_alloc
00:04:42.359 ************************************
00:04:42.359 06:29:53 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:42.359 06:29:53 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:42.359 06:29:53 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:42.359 06:29:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:42.359 06:29:53 -- setup/hugepages.sh@51 -- # shift
00:04:42.359 06:29:53 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:42.359 06:29:53 -- setup/hugepages.sh@52 -- # local node_ids
00:04:42.359 06:29:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.359 06:29:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:42.359 06:29:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:42.359 06:29:53 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:42.359 06:29:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.359 06:29:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:42.359 06:29:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.359 06:29:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.359 06:29:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.359 06:29:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:42.359 06:29:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:42.359 06:29:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:42.359 06:29:53 -- setup/hugepages.sh@73 -- # return 0
00:04:42.359 06:29:53 -- setup/hugepages.sh@198 -- # setup output
00:04:42.359 06:29:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.359 06:29:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:42.935 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.935 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.935 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.935 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
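The get_test_nr_hugepages xtrace above reduces to one division plus a per-node split: the requested size (2097152 kB here) divided by the default hugepage size gives nr_hugepages=1024, which is then assigned to each node the caller listed. A minimal stand-alone sketch of that arithmetic, assuming the 2048 kB Hugepagesize this run's /proc/meminfo reports (names below are illustrative, not the SPDK functions):

  #!/usr/bin/env bash
  # Sketch of the hugepage-count derivation visible in the trace.
  size_kb=2097152            # requested test size from the command line
  hugepage_kb=2048           # Hugepagesize reported by /proc/meminfo in this run
  nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 1024
  declare -A nodes_test
  for node_id in 0; do       # node list passed by the caller ('0' here)
      nodes_test[$node_id]=$nr_hugepages
  done
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"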
00:04:42.935 06:29:53 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:42.935 06:29:53 -- setup/hugepages.sh@89 -- # local node
00:04:42.935 06:29:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.935 06:29:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.935 06:29:53 -- setup/hugepages.sh@92 -- # local surp
00:04:42.935 06:29:53 -- setup/hugepages.sh@93 -- # local resv
00:04:42.935 06:29:53 -- setup/hugepages.sh@94 -- # local anon
00:04:42.935 06:29:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.935 06:29:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.935 06:29:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.935 06:29:53 -- setup/common.sh@18 -- # local node=
00:04:42.935 06:29:53 -- setup/common.sh@19 -- # local var val
00:04:42.935 06:29:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.935 06:29:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.935 06:29:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.935 06:29:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.935 06:29:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.935 06:29:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.935 06:29:53 -- setup/common.sh@31 -- # IFS=': '
00:04:42.935 06:29:53 -- setup/common.sh@31 -- # read -r var val _
00:04:42.935 06:29:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6791360 kB' 'MemAvailable: 9467588 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 465768 kB' 'Inactive: 2533240 kB' 'Active(anon): 126532 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117616 kB' 'Mapped: 50028 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 185964 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104160 kB' 'KernelStack: 6556 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... per-key scan xtrace elided: every field from MemTotal through HardwareCorrupted is tested against AnonHugePages and hits 'continue' ...]
00:04:42.935 06:29:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.935 06:29:53 -- setup/common.sh@33 -- # echo 0
00:04:42.935 06:29:53 -- setup/common.sh@33 -- # return 0
00:04:42.935 06:29:53 -- setup/hugepages.sh@97 -- # anon=0
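The four get_meminfo calls in this test all follow the same pattern visible in the xtrace: slurp /proc/meminfo (or a per-node meminfo when a node is given), strip any "Node N" prefix, then read each line with IFS=': ' and skip it with 'continue' unless the key matches the one requested. A condensed sketch reconstructed from the trace, not copied from the SPDK source:

  # get_meminfo_sketch KEY -> prints the value of KEY from /proc/meminfo
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys hit 'continue'
          echo "$val"
          return 0
      done < /proc/meminfo
  }
  get_meminfo_sketch AnonHugePages   # prints 0 in this run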
00:04:42.936 06:29:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.936 06:29:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.936 06:29:53 -- setup/common.sh@18 -- # local node=
00:04:42.936 06:29:53 -- setup/common.sh@19 -- # local var val
00:04:42.936 06:29:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.936 06:29:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.936 06:29:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.936 06:29:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.936 06:29:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.936 06:29:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.936 06:29:53 -- setup/common.sh@31 -- # IFS=': '
00:04:42.936 06:29:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6791360 kB' 'MemAvailable: 9467588 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 465688 kB' 'Inactive: 2533240 kB' 'Active(anon): 126452 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117276 kB' 'Mapped: 50028 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 185972 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104168 kB' 'KernelStack: 6556 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:42.936 06:29:53 -- setup/common.sh@31 -- # read -r var val _
[... per-key scan xtrace elided: every field is tested against HugePages_Surp and hits 'continue' until the key matches ...]
00:04:42.938 06:29:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.938 06:29:53 -- setup/common.sh@33 -- # echo 0
00:04:42.938 06:29:53 -- setup/common.sh@33 -- # return 0
00:04:42.938 06:29:53 -- setup/hugepages.sh@99 -- # surp=0
00:04:42.938 06:29:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.938 06:29:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.938 06:29:53 -- setup/common.sh@18 -- # local node=
00:04:42.938 06:29:53 -- setup/common.sh@19 -- # local var val
00:04:42.938 06:29:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.938 06:29:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.938 06:29:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.938 06:29:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.938 06:29:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.938 06:29:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.938 06:29:53 -- setup/common.sh@31 -- # IFS=': '
00:04:42.938 06:29:53 -- setup/common.sh@31 -- # read -r var val _
00:04:42.938 06:29:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6791360 kB' 'MemAvailable: 9467588 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 465588 kB' 'Inactive: 2533240 kB' 'Active(anon): 126352 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117432 kB' 'Mapped: 50028 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 185976 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104172 kB' 'KernelStack: 6492 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
[... per-key scan xtrace elided: every field is tested against HugePages_Rsvd and hits 'continue' until the key matches ...]
00:04:42.939 06:29:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.939 06:29:53 -- setup/common.sh@33 -- # echo 0
00:04:42.939 06:29:53 -- setup/common.sh@33 -- # return 0
00:04:42.939 06:29:53 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.939 nr_hugepages=1024
00:04:42.939 resv_hugepages=0
00:04:42.939 surplus_hugepages=0
00:04:42.939 06:29:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.939 06:29:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.939 06:29:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.939 anon_hugepages=0
00:04:42.939 06:29:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.939 06:29:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.939 06:29:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
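The @107/@109 checks above are the heart of the verification: the nr_hugepages the test asked for, plus any surplus and reserved pages, must account for the HugePages_Total the kernel reports. With the values read in this run (surp=0, resv=0, total 1024) both checks pass. An equivalent stand-alone check, a sketch assuming awk is available:

  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"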
00:04:42.939 06:29:53 -- setup/common.sh@31-32 -- # [meminfo scan: 46 keys, MemAvailable through Unaccepted, fail [[ $var == HugePages_Total ]]; loop continues]
00:04:42.941 06:29:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.941 06:29:53 -- setup/common.sh@33 -- # echo 1024
00:04:42.941 06:29:53 -- setup/common.sh@33 -- # return 0
00:04:42.941 06:29:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.941 06:29:53 -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.941 06:29:53 -- setup/hugepages.sh@27 -- # local node
00:04:42.941 06:29:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.941 06:29:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:42.941 06:29:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.941 06:29:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.941 06:29:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.941 06:29:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.941 06:29:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
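The match/continue cycles condensed above are all one loop: get_meminfo in setup/common.sh picks a meminfo file, reads it into an array, then splits each line on ': ' until the requested key matches. A minimal standalone sketch of that pattern, reconstructed from the @17-@33 trace lines (the real helper drives the loop through the printf/mapfile pair visible in the trace; this version reads the file directly):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (setup/common.sh@17-33).
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Use the per-node meminfo when a node is given (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"            # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")    # drop "Node N " prefixes (common.sh@29)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # common.sh@31
            [[ $var == "$get" ]] || continue        # common.sh@32
            echo "$val"                             # common.sh@33
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total     # prints 1024 on this runner
    get_meminfo HugePages_Surp 0    # prints 0 for node0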
00:04:42.941 06:29:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.941 06:29:53 -- setup/common.sh@18 -- # local node=0
00:04:42.941 06:29:53 -- setup/common.sh@19 -- # local var val
00:04:42.941 06:29:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.941 06:29:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.941 06:29:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.941 06:29:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.941 06:29:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.941 06:29:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.941 06:29:53 -- setup/common.sh@31 -- # IFS=': '
00:04:42.941 06:29:53 -- setup/common.sh@31 -- # read -r var val _
00:04:42.941 06:29:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6791992 kB' 'MemUsed: 5445100 kB' 'SwapCached: 0 kB' 'Active: 465476 kB' 'Inactive: 2533240 kB' 'Active(anon): 126240 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2882972 kB' 'Mapped: 49908 kB' 'AnonPages: 117280 kB' 'Shmem: 10496 kB' 'KernelStack: 6564 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81804 kB' 'Slab: 186028 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:42.941 06:29:53 -- setup/common.sh@31-32 -- # [node0 meminfo scan: 36 keys, MemTotal through HugePages_Free, fail [[ $var == HugePages_Surp ]]; loop continues]
00:04:42.942 06:29:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.942 06:29:53 -- setup/common.sh@33 -- # echo 0
00:04:42.942 06:29:53 -- setup/common.sh@33 -- # return 0
00:04:42.942 06:29:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.942 06:29:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.942 06:29:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.942 06:29:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.942 node0=1024 expecting 1024
00:04:42.942 06:29:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:42.942 06:29:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:42.942 06:29:53 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:42.942 06:29:53 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:42.942 06:29:53 -- setup/hugepages.sh@202 -- # setup output
00:04:42.942 06:29:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.942 06:29:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:43.539 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.539 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.539 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.539 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.539 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:43.539 06:29:54 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:43.539 06:29:54 -- setup/hugepages.sh@89 -- # local node
00:04:43.539 06:29:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.539 06:29:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.539 06:29:54 -- setup/hugepages.sh@92 -- # local surp
00:04:43.539 06:29:54 -- setup/hugepages.sh@93 -- # local resv
00:04:43.539 06:29:54 -- setup/hugepages.sh@94 -- # local anon
00:04:43.539 06:29:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.539 06:29:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
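The arithmetic behind the "node0=1024 expecting 1024" line is visible at hugepages.sh@110-@128: the pool is considered valid when HugePages_Total equals the requested count plus surplus and reserved pages, and each node's /sys count has to match the per-node expectation once reserved pages are folded in. A sketch of that accounting with this run's values (get_meminfo as sketched above; surp and resv were gathered before this excerpt, both 0 here):

    # Accounting from the verify_nr_hugepages trace (hugepages.sh@110-@128).
    nr_hugepages=1024                       # requested by the test
    surp=0 resv=0                           # surplus/reserved, per the trace
    total=$(get_meminfo HugePages_Total)    # 1024 here

    # hugepages.sh@110: every requested page must be accounted for.
    (( total == nr_hugepages + surp + resv )) || exit 1

    # Per-node expectation vs. what /sys reports (single node in this run).
    nodes_test=([0]=$nr_hugepages)
    nodes_sys=([0]=1024)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))      # hugepages.sh@116
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done                                    # prints: node0=1024 expecting 1024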
00:04:43.539 06:29:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.539 06:29:54 -- setup/common.sh@18 -- # local node=
00:04:43.539 06:29:54 -- setup/common.sh@19 -- # local var val
00:04:43.539 06:29:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.539 06:29:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.539 06:29:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.539 06:29:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.539 06:29:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.539 06:29:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.539 06:29:54 -- setup/common.sh@31 -- # IFS=': '
00:04:43.539 06:29:54 -- setup/common.sh@31 -- # read -r var val _
00:04:43.539 06:29:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6796816 kB' 'MemAvailable: 9473044 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 466220 kB' 'Inactive: 2533240 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118128 kB' 'Mapped: 50296 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 186000 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104196 kB' 'KernelStack: 6636 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55672 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:43.539 06:29:54 -- setup/common.sh@31-32 -- # [meminfo scan: 40 keys, MemTotal through HardwareCorrupted, fail [[ $var == AnonHugePages ]]; loop continues]
00:04:43.540 06:29:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.540 06:29:54 -- setup/common.sh@33 -- # echo 0
00:04:43.540 06:29:54 -- setup/common.sh@33 -- # return 0
00:04:43.540 06:29:54 -- setup/hugepages.sh@97 -- # anon=0
00:04:43.540 06:29:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
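Before re-deriving the surplus and reserved counts, verify_nr_hugepages checks at hugepages.sh@96 whether transparent hugepages are disabled; the trace shows the sysfs knob reading "always [madvise] never", so AnonHugePages is sampled (0 kB in this run), presumably so the later accounting can allow for THP-backed anonymous memory (how anon is used afterwards falls outside this excerpt). A sketch of that gate:

    # THP gate seen at hugepages.sh@96-@97. The bracketed word in the sysfs
    # file is the active mode; "[never]" would mean THP is fully off.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # 0 in this run
    fi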
00:04:43.540 06:29:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.540 06:29:54 -- setup/common.sh@18 -- # local node=
00:04:43.540 06:29:54 -- setup/common.sh@19 -- # local var val
00:04:43.540 06:29:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.540 06:29:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.540 06:29:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.540 06:29:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.540 06:29:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.540 06:29:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.540 06:29:54 -- setup/common.sh@31 -- # IFS=': '
00:04:43.540 06:29:54 -- setup/common.sh@31 -- # read -r var val _
00:04:43.540 06:29:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6796816 kB' 'MemAvailable: 9473044 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 466240 kB' 'Inactive: 2533240 kB' 'Active(anon): 127004 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118148 kB' 'Mapped: 50228 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 186000 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104196 kB' 'KernelStack: 6636 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
00:04:43.541 06:29:54 -- setup/common.sh@31-32 -- # [meminfo scan: 51 keys, MemTotal through HugePages_Rsvd, fail [[ $var == HugePages_Surp ]]; loop continues]
00:04:43.542 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.542 06:29:54 -- setup/common.sh@33 -- # echo 0
00:04:43.542 06:29:54 -- setup/common.sh@33 -- # return 0
00:04:43.542 06:29:54 -- setup/hugepages.sh@99 -- # surp=0
00:04:43.542 06:29:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.542 06:29:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.542 06:29:54 -- setup/common.sh@18 -- # local node=
00:04:43.542 06:29:54 -- setup/common.sh@19 -- # local var val
00:04:43.542 06:29:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.542 06:29:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.542 06:29:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.542 06:29:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.542 06:29:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.542 06:29:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.542 06:29:54 -- setup/common.sh@31 -- # IFS=': '
00:04:43.542 06:29:54 -- setup/common.sh@31 -- # read -r var val _
00:04:43.542 06:29:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6796816 kB' 'MemAvailable: 9473044 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 465492 kB' 'Inactive: 2533240 kB' 'Active(anon): 126256 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117328 kB' 'Mapped: 50096 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 185984 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104180 kB' 'KernelStack: 6556 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB'
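Both the system-wide queries and the node0 query above go through the same parser because of the prefix strip at setup/common.sh@29: per-node meminfo lines carry a "Node N " prefix that the extglob pattern removes, and the expansion is a no-op for plain /proc/meminfo lines. A small demonstration of just that step:

    # The "Node N " strip from setup/common.sh@29 in isolation.
    shopt -s extglob                              # +([0-9]) is an extglob pattern
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    # "Node 0 HugePages_Total:     1024"  ->  "HugePages_Total:     1024"
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"                 # first three normalized lines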
00:04:43.542 06:29:54 -- setup/common.sh@31-32 -- # [meminfo scan: 37 keys so far, MemTotal through VmallocUsed, fail [[ $var == HugePages_Rsvd ]]; loop continues]
-- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 
06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.543 06:29:54 -- setup/common.sh@33 -- # echo 0 00:04:43.543 06:29:54 -- setup/common.sh@33 -- # return 0 00:04:43.543 06:29:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:43.543 nr_hugepages=1024 00:04:43.543 06:29:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.543 resv_hugepages=0 00:04:43.543 06:29:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.543 06:29:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.543 surplus_hugepages=0 00:04:43.543 anon_hugepages=0 00:04:43.543 06:29:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.543 06:29:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.543 06:29:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.543 06:29:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.543 06:29:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.543 06:29:54 -- setup/common.sh@18 -- # local node= 00:04:43.543 06:29:54 -- setup/common.sh@19 -- # local var val 00:04:43.543 06:29:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.543 06:29:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.543 06:29:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.543 06:29:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.543 06:29:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.543 06:29:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6796816 kB' 'MemAvailable: 9473044 kB' 'Buffers: 3704 kB' 'Cached: 2879268 kB' 'SwapCached: 0 kB' 'Active: 465460 kB' 'Inactive: 2533240 kB' 'Active(anon): 126224 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117320 kB' 'Mapped: 49852 kB' 'Shmem: 10496 kB' 'KReclaimable: 81804 kB' 'Slab: 185992 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104188 kB' 'KernelStack: 6496 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458572 kB' 'Committed_AS: 304288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 5052416 kB' 'DirectMap1G: 9437184 kB' 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # continue 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.543 06:29:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.543 06:29:54 -- setup/common.sh@32 -- # [[ MemFree == 
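The condensed scans above all come from the same get_meminfo helper. A minimal sketch of the lookup pattern, reconstructed from the xtrace rather than taken verbatim from setup/common.sh: slurp the meminfo file (global or per-node), strip any "Node N " prefix, split each line on ':'/spaces, and print the value of the first matching field.

  shopt -s extglob   # needed for the +([0-9]) pattern below
  get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo mem
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node 0 "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long continue runs condensed above
          echo "$val"
          return 0
      done
      return 1
  }
  # e.g.: get_meminfo_sketch HugePages_Total    -> 1024
  #       get_meminfo_sketch HugePages_Surp 0   -> 0 (NUMA node 0)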
[xtrace condensed: the scan then walks every field of the dump above, from MemTotal through Unaccepted, hitting continue on each until HugePages_Total matches]
00:04:43.545 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.545 06:29:54 -- setup/common.sh@33 -- # echo 1024
00:04:43.545 06:29:54 -- setup/common.sh@33 -- # return 0
00:04:43.545 06:29:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.545 06:29:54 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.545 06:29:54 -- setup/hugepages.sh@27 -- # local node
00:04:43.545 06:29:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.545 06:29:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.545 06:29:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.545 06:29:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.545 06:29:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.545 06:29:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.545 06:29:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.545 06:29:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.545 06:29:54 -- setup/common.sh@18 -- # local node=0
00:04:43.545 06:29:54 -- setup/common.sh@19 -- # local var val
00:04:43.545 06:29:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.545 06:29:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.545 06:29:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.545 06:29:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.545 06:29:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.545 06:29:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.545 06:29:54 -- setup/common.sh@31 -- # IFS=': '
00:04:43.545 06:29:54 -- setup/common.sh@31 -- # read -r var val _
00:04:43.545 06:29:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237092 kB' 'MemFree: 6796816 kB' 'MemUsed: 5440276 kB' 'SwapCached: 0 kB' 'Active: 465492 kB' 'Inactive: 2533240 kB' 'Active(anon): 126256 kB' 'Inactive(anon): 0 kB' 'Active(file): 339236 kB' 'Inactive(file): 2533240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2882972 kB' 'Mapped: 49852 kB' 'AnonPages: 117296 kB' 'Shmem: 10496 kB' 'KernelStack: 6548 kB' 'PageTables: 3520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81804 kB' 'Slab: 185992 kB' 'SReclaimable: 81804 kB' 'SUnreclaim: 104188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the node0 scan likewise hits continue on every field from MemTotal through HugePages_Free before matching]
00:04:43.546 06:29:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.546 06:29:54 -- setup/common.sh@33 -- # echo 0
00:04:43.546 06:29:54 -- setup/common.sh@33 -- # return 0
00:04:43.546 06:29:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.546 06:29:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
node0=1024 expecting 1024
00:04:43.546 06:29:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.546 06:29:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.546 06:29:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:43.546 06:29:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.546
00:04:43.546 real	0m1.195s
00:04:43.546 user	0m0.474s
00:04:43.546 sys	0m0.770s
00:04:43.546 ************************************
00:04:43.546 END TEST no_shrink_alloc
00:04:43.546 ************************************
06:29:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:43.546 06:29:54 -- common/autotest_common.sh@10 -- # set +x
00:04:43.809 06:29:54 -- setup/hugepages.sh@217 -- # clear_hp
00:04:43.809 06:29:54 -- setup/hugepages.sh@37 -- # local node hp
00:04:43.809 06:29:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:43.809 06:29:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.809 06:29:54 -- setup/hugepages.sh@41 -- # echo 0
00:04:43.809 06:29:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.809 06:29:54 -- setup/hugepages.sh@41 -- # echo 0
00:04:43.809 06:29:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:43.809 06:29:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:43.809 ************************************
00:04:43.809 END TEST hugepages
00:04:43.809 ************************************
00:04:43.809
00:04:43.809 real	0m5.416s
00:04:43.809 user	0m2.149s
00:04:43.809 sys	0m3.110s
00:04:43.809 06:29:54 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
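Condensed above, the hugepages assertions reduce to a balance check: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, globally and per NUMA node. A hedged sketch of that logic (helper names reuse the sketch earlier in this log; the echo format mirrors the 'node0=1024 expecting 1024' line the test prints):

  verify_hugepages_sketch() {
      local expected=$1   # e.g. 1024, the value written to nr_hugepages
      local total resv surp node node_total
      total=$(get_meminfo_sketch HugePages_Total)
      resv=$(get_meminfo_sketch HugePages_Rsvd)
      surp=$(get_meminfo_sketch HugePages_Surp)
      (( total == expected + surp + resv )) || return 1      # global balance
      for node in /sys/devices/system/node/node[0-9]*; do
          node=${node##*node}
          node_total=$(get_meminfo_sketch HugePages_Total "$node")
          echo "node$node=$node_total expecting $expected"
          # single-node box in this run, so node0 owns the whole pool;
          # a multi-node system would split the expectation per node
          [[ $node_total == "$expected" ]] || return 1
      done
  }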
06:29:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:43.809 06:29:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:43.809 06:29:54 -- common/autotest_common.sh@10 -- # set +x
00:04:43.809 ************************************
00:04:43.809 START TEST driver
00:04:43.809 ************************************
00:04:43.809 06:29:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:43.809 * Looking for test storage...
00:04:43.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:43.809 06:29:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:43.809 06:29:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:43.809 06:29:54 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:43.809 06:29:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:43.809 06:29:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:43.809 06:29:54 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:43.809 06:29:54 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:43.809 06:29:54 -- scripts/common.sh@335 -- # IFS=.-:
00:04:43.809 06:29:54 -- scripts/common.sh@335 -- # read -ra ver1
00:04:43.809 06:29:54 -- scripts/common.sh@336 -- # IFS=.-:
00:04:43.809 06:29:54 -- scripts/common.sh@336 -- # read -ra ver2
00:04:43.809 06:29:54 -- scripts/common.sh@337 -- # local 'op=<'
00:04:43.809 06:29:54 -- scripts/common.sh@339 -- # ver1_l=2
00:04:43.809 06:29:54 -- scripts/common.sh@340 -- # ver2_l=1
00:04:43.809 06:29:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:43.809 06:29:54 -- scripts/common.sh@343 -- # case "$op" in
00:04:43.809 06:29:54 -- scripts/common.sh@344 -- # : 1
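The version gate here (lt 1.15 2, checking the installed lcov) is a component-wise numeric compare. A minimal sketch of the algorithm the scripts/common.sh trace walks through below, reconstructed from the xtrace; only '<' and '>' are handled, and missing components default to 0:

  cmp_versions_sketch() {
      local op=$2 ver1 ver2 v len
      IFS='.-:' read -ra ver1 <<< "$1"    # split "1.15" into (1 15)
      IFS='.-:' read -ra ver2 <<< "$3"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      return 1   # all components equal: neither strict comparison holds
  }
  # e.g.: cmp_versions_sketch 1.15 '<' 2 && echo "lcov older than 2"   # succeeds, as in the trace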
00:04:43.809 06:29:54 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:43.809 06:29:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:43.809 06:29:54 -- scripts/common.sh@364 -- # decimal 1
00:04:43.809 06:29:54 -- scripts/common.sh@352 -- # local d=1
00:04:43.809 06:29:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:43.809 06:29:54 -- scripts/common.sh@354 -- # echo 1
00:04:43.809 06:29:54 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:43.809 06:29:54 -- scripts/common.sh@365 -- # decimal 2
00:04:43.809 06:29:54 -- scripts/common.sh@352 -- # local d=2
00:04:43.809 06:29:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:43.809 06:29:54 -- scripts/common.sh@354 -- # echo 2
00:04:43.809 06:29:54 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:43.809 06:29:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:43.809 06:29:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:43.809 06:29:54 -- scripts/common.sh@367 -- # return 0
00:04:43.809 06:29:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:43.809 06:29:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:43.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.809 --rc genhtml_branch_coverage=1
00:04:43.809 --rc genhtml_function_coverage=1
00:04:43.809 --rc genhtml_legend=1
00:04:43.809 --rc geninfo_all_blocks=1
00:04:43.809 --rc geninfo_unexecuted_blocks=1
00:04:43.809
00:04:43.809 '
[xtrace condensed: the LCOV_OPTS assignment and the export/assignment of LCOV='lcov ...' repeat the same --rc option block three more times]
00:04:43.809 06:29:54 -- setup/driver.sh@68 -- # setup reset
00:04:43.809 06:29:54 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:43.809 06:29:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:50.414 06:30:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:50.414 06:30:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:50.414 06:30:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:50.414 06:30:00 -- common/autotest_common.sh@10 -- # set +x
00:04:50.414 ************************************
00:04:50.414 START TEST guess_driver
00:04:50.414 ************************************
00:04:50.414 06:30:00 -- common/autotest_common.sh@1114 -- # guess_driver
00:04:50.414 06:30:00 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:50.414 06:30:00 -- setup/driver.sh@47 -- # local fail=0
00:04:50.414 06:30:00 -- setup/driver.sh@49 -- # pick_driver
00:04:50.414 06:30:00 -- setup/driver.sh@36 -- # vfio
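pick_driver, entered here, prefers vfio when the machine has IOMMU groups (or vfio's unsafe no-IOMMU mode is switched on) and otherwise falls back to uio_pci_generic if modprobe can resolve the module. A hedged sketch of that decision, reconstructed from the trace rather than copied from setup/driver.sh (the vfio-pci module name is an assumption):

  pick_driver_sketch() {
      local groups unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
      groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
      if (( groups > 0 )) || [[ -e $unsafe && $(< "$unsafe") == Y ]]; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          # the trace matches the "insmod .../uio.ko.xz ..." output against *.ko*
          echo uio_pci_generic
      else
          echo 'No valid driver found' >&2
          return 1
      fi
  }

In this VM run the first branch fails (zero IOMMU groups, no unsafe no-IOMMU mode), which is why the trace below falls through to uio_pci_generic.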
00:04:50.414 06:30:00 -- setup/driver.sh@21 -- # local iommu_grups
00:04:50.414 06:30:00 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:50.414 06:30:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:50.414 06:30:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:50.414 06:30:00 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:50.414 06:30:00 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:50.414 06:30:00 -- setup/driver.sh@32 -- # return 1
00:04:50.414 06:30:00 -- setup/driver.sh@38 -- # uio
00:04:50.414 06:30:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz
insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:50.414 06:30:00 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:50.414 06:30:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
Looking for driver=uio_pci_generic
00:04:50.414 06:30:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:50.414 06:30:00 -- setup/driver.sh@45 -- # setup output config
00:04:50.414 06:30:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.414 06:30:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:50.676 06:30:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:50.676 06:30:01 -- setup/driver.sh@58 -- # continue
00:04:50.676 06:30:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:50.676 06:30:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:50.676 06:30:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
[xtrace condensed: the read/match pair repeats for the remaining three '->' device lines, each bound to uio_pci_generic]
00:04:50.937 06:30:01 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:50.937 06:30:01 -- setup/driver.sh@65 -- # setup reset
00:04:50.937 06:30:01 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:50.937 06:30:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:57.557
00:04:57.557 real	0m6.897s
00:04:57.557 user	0m0.650s
00:04:57.557 sys	0m1.247s
06:30:07 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:57.557 ************************************
00:04:57.557 END TEST guess_driver
00:04:57.557 ************************************
06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:04:57.557 ************************************
00:04:57.557 END TEST driver
00:04:57.557 ************************************
00:04:57.557
00:04:57.557 real	0m12.938s
00:04:57.557 user	0m1.002s
00:04:57.557 sys	0m1.980s
00:04:57.557 06:30:07 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
06:30:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
06:30:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
06:30:07 -- common/autotest_common.sh@10 -- # set +x
00:04:57.557 ************************************
00:04:57.557 START TEST devices
00:04:57.557 ************************************
06:30:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:57.557 * Looking for test storage...
00:04:57.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
[xtrace condensed: the same lcov version gate (lt 1.15 2 via cmp_versions) and LCOV_OPTS/LCOV exports as at the top of the driver test run again here]
00:04:57.558 06:30:07 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:57.558 06:30:07 -- setup/devices.sh@192 -- # setup reset
00:04:57.558 06:30:07 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:57.558 06:30:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:57.819 06:30:08 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:57.819 06:30:08 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:57.819 06:30:08 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:57.819 06:30:08 -- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:57.819 06:30:08 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:57.819 06:30:08 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1
00:04:57.819 06:30:08 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1
00:04:57.819 06:30:08 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]]
06:30:08 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
[xtrace condensed: the same zoned probe (is_block_zoned / local device / [[ -e /sys/block/<dev>/queue/zoned ]] / [[ none != none ]]) repeats for nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1 and nvme3n1 — none of the namespaces is zoned]
00:04:57.820 06:30:08 -- setup/devices.sh@196 -- # blocks=()
00:04:57.820 06:30:08 -- setup/devices.sh@196 -- # declare -a blocks
00:04:57.820 06:30:08 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:57.820 06:30:08 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:57.820 06:30:08 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:57.820 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:57.820 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:57.820 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:57.820 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:09.0
00:04:57.820 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]]
00:04:57.820 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:57.820 06:30:08 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
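Two guards filter the namespace list: is_block_zoned keeps zoned devices out of the pool, and block_in_use (about to run below) treats any disk that still carries a partition table as occupied. A sketch of both, reconstructed from the trace — the real block_in_use also consults scripts/spdk-gpt.py, which is not reproduced here:

  is_block_zoned_sketch() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # "none" means a regular, non-zoned disk
  }
  block_in_use_sketch() {
      local block=$1 pt
      pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null)
      [[ -n $pt ]]   # empty PTTYPE is the "No valid GPT data, bailing" case below: disk is free
  }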
00:04:57.820 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:57.820 No valid GPT data, bailing 00:04:57.820 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.080 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:58.080 06:30:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:58.080 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:58.080 06:30:08 -- setup/common.sh@80 -- # echo 1073741824 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:58.080 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:58.080 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:58.080 06:30:08 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:58.080 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:58.080 No valid GPT data, bailing 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.080 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:58.080 06:30:08 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:58.080 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:58.080 06:30:08 -- setup/common.sh@80 -- # echo 4294967296 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.080 06:30:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.080 06:30:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:58.080 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:58.080 06:30:08 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:58.080 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:58.080 No valid GPT data, bailing 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.080 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:58.080 06:30:08 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:58.080 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:58.080 06:30:08 -- setup/common.sh@80 -- # echo 4294967296 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.080 06:30:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.080 06:30:08 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:58.080 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:58.080 06:30:08 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:58.080 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:58.080 No valid GPT data, bailing 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:58.080 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.080 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:58.080 06:30:08 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:58.080 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:58.080 06:30:08 -- setup/common.sh@80 -- # echo 4294967296 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:58.080 06:30:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.080 06:30:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:58.080 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:58.080 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:58.080 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:58.080 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:58.080 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:58.080 06:30:08 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:58.080 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:58.342 No valid GPT data, bailing 00:04:58.342 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:58.342 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.342 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.342 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:58.342 06:30:08 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:58.342 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:58.342 06:30:08 -- setup/common.sh@80 -- # echo 6343335936 00:04:58.342 06:30:08 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:58.342 06:30:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.342 06:30:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:58.342 06:30:08 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:58.342 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:58.342 06:30:08 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:58.342 06:30:08 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:58.342 06:30:08 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:58.342 06:30:08 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:58.342 06:30:08 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:58.342 06:30:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:58.342 No valid GPT data, bailing 
00:04:58.342 06:30:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:58.342 06:30:08 -- scripts/common.sh@393 -- # pt= 00:04:58.342 06:30:08 -- scripts/common.sh@394 -- # return 1 00:04:58.342 06:30:08 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:58.342 06:30:08 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:58.342 06:30:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:58.342 06:30:08 -- setup/common.sh@80 -- # echo 5368709120 00:04:58.342 06:30:08 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:58.342 06:30:08 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:58.342 06:30:08 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:58.342 06:30:08 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:58.342 06:30:08 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:58.342 06:30:08 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:58.342 06:30:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.342 06:30:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.342 06:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:58.342 ************************************ 00:04:58.342 START TEST nvme_mount 00:04:58.342 ************************************ 00:04:58.342 06:30:08 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:58.342 06:30:08 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:58.342 06:30:08 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:58.342 06:30:08 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.342 06:30:08 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:58.342 06:30:08 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:58.342 06:30:08 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:58.342 06:30:08 -- setup/common.sh@40 -- # local part_no=1 00:04:58.342 06:30:08 -- setup/common.sh@41 -- # local size=1073741824 00:04:58.342 06:30:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.342 06:30:08 -- setup/common.sh@44 -- # parts=() 00:04:58.342 06:30:08 -- setup/common.sh@44 -- # local parts 00:04:58.342 06:30:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.342 06:30:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.342 06:30:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.342 06:30:08 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.342 06:30:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.342 06:30:08 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:58.342 06:30:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:58.342 06:30:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:59.284 Creating new GPT entries in memory. 00:04:59.284 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.284 other utilities. 00:04:59.284 06:30:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.284 06:30:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.284 06:30:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.284 06:30:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.284 06:30:10 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:00.670 Creating new GPT entries in memory. 
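The partitioning step above zaps the GPT and then creates one 128 MiB partition (sectors 2048-264191) under an exclusive `flock` on the disk node, while the harness's `sync_dev_uevents.sh` waits for the partition uevent. A sketch of the same flow; `udevadm settle` stands in for that helper, and the disk path is illustrative:

```bash
#!/usr/bin/env bash
# Wipe the GPT, create one 128 MiB partition (sectors 2048-264191)
# while holding an exclusive lock on the disk node, then wait for udev
# so the /dev/...p1 node exists before mkfs runs.
disk=/dev/nvme1n1   # illustrative; use whichever disk the scan picked

sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:264191
udevadm settle
[[ -b ${disk}p1 ]] && echo "partition node ready: ${disk}p1"
```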
00:05:00.671 The operation has completed successfully. 00:05:00.671 06:30:11 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.671 06:30:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.671 06:30:11 -- setup/common.sh@62 -- # wait 65807 00:05:00.671 06:30:11 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.671 06:30:11 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:00.671 06:30:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.671 06:30:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:05:00.671 06:30:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:05:00.671 06:30:11 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.671 06:30:11 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.671 06:30:11 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:00.671 06:30:11 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:05:00.671 06:30:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.671 06:30:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.671 06:30:11 -- setup/devices.sh@53 -- # local found=0 00:05:00.671 06:30:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.671 06:30:11 -- setup/devices.sh@56 -- # : 00:05:00.671 06:30:11 -- setup/devices.sh@59 -- # local pci status 00:05:00.671 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.671 06:30:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:00.671 06:30:11 -- setup/devices.sh@47 -- # setup output config 00:05:00.671 06:30:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.671 06:30:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.671 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.671 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.671 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.671 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.932 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.932 06:30:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:05:00.932 06:30:11 -- setup/devices.sh@63 -- # found=1 00:05:00.932 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.932 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.932 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.193 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.193 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.193 06:30:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.193 06:30:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.193 06:30:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.193 06:30:11 -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:01.193 06:30:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.193 06:30:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.193 06:30:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.193 06:30:11 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:01.193 06:30:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.454 06:30:11 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.454 06:30:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:01.454 06:30:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:01.454 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.454 06:30:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:01.454 06:30:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:01.714 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.714 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.714 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.714 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:01.714 06:30:12 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:01.714 06:30:12 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:01.714 06:30:12 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.714 06:30:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:05:01.714 06:30:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:05:01.714 06:30:12 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.714 06:30:12 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.714 06:30:12 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:01.714 06:30:12 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:05:01.714 06:30:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.714 06:30:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:01.714 06:30:12 -- setup/devices.sh@53 -- # local found=0 00:05:01.714 06:30:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.714 06:30:12 -- setup/devices.sh@56 -- # : 00:05:01.714 06:30:12 -- setup/devices.sh@59 -- # local pci status 00:05:01.714 06:30:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.714 06:30:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:01.714 06:30:12 -- setup/devices.sh@47 -- # setup output config 00:05:01.714 06:30:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.714 06:30:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.975 06:30:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.975 06:30:12 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:01.975 06:30:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.975 06:30:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.235 06:30:12 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.235 06:30:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:05:02.235 06:30:12 -- setup/devices.sh@63 -- # found=1 00:05:02.235 06:30:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.235 06:30:12 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.235 06:30:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.496 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.496 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.496 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.496 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.496 06:30:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.496 06:30:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:02.496 06:30:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.496 06:30:13 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.496 06:30:13 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.496 06:30:13 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.496 06:30:13 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:05:02.496 06:30:13 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:02.496 06:30:13 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:05:02.496 06:30:13 -- setup/devices.sh@50 -- # local mount_point= 00:05:02.496 06:30:13 -- setup/devices.sh@51 -- # local test_file= 00:05:02.496 06:30:13 -- setup/devices.sh@53 -- # local found=0 00:05:02.496 06:30:13 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.496 06:30:13 -- setup/devices.sh@59 -- # local pci status 00:05:02.496 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.496 06:30:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:02.496 06:30:13 -- setup/devices.sh@47 -- # setup output config 00:05:02.496 06:30:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.496 06:30:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.757 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.757 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.757 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:02.757 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.018 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:03.018 06:30:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:05:03.018 06:30:13 -- setup/devices.sh@63 -- # found=1 00:05:03.018 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.018 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:03.018 
06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.278 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:03.278 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.278 06:30:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:03.278 06:30:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.278 06:30:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.278 06:30:14 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.278 06:30:14 -- setup/devices.sh@68 -- # return 0 00:05:03.278 06:30:14 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.278 06:30:14 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.278 06:30:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:03.278 06:30:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:03.278 06:30:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:03.539 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.539 00:05:03.539 real 0m5.061s 00:05:03.539 user 0m1.002s 00:05:03.539 sys 0m1.410s 00:05:03.539 06:30:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.539 06:30:14 -- common/autotest_common.sh@10 -- # set +x 00:05:03.539 ************************************ 00:05:03.539 END TEST nvme_mount 00:05:03.539 ************************************ 00:05:03.539 06:30:14 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.539 06:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.539 06:30:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.539 06:30:14 -- common/autotest_common.sh@10 -- # set +x 00:05:03.539 ************************************ 00:05:03.539 START TEST dm_mount 00:05:03.539 ************************************ 00:05:03.539 06:30:14 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:03.539 06:30:14 -- setup/devices.sh@144 -- # pv=nvme1n1 00:05:03.539 06:30:14 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:05:03.539 06:30:14 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:05:03.539 06:30:14 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:05:03.539 06:30:14 -- setup/common.sh@39 -- # local disk=nvme1n1 00:05:03.539 06:30:14 -- setup/common.sh@40 -- # local part_no=2 00:05:03.539 06:30:14 -- setup/common.sh@41 -- # local size=1073741824 00:05:03.539 06:30:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.539 06:30:14 -- setup/common.sh@44 -- # parts=() 00:05:03.539 06:30:14 -- setup/common.sh@44 -- # local parts 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.539 06:30:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.539 06:30:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part++ )) 00:05:03.539 06:30:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.539 06:30:14 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:03.539 06:30:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:05:03.539 06:30:14 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:05:04.482 Creating new GPT entries in memory. 00:05:04.482 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:05:04.482 other utilities. 00:05:04.482 06:30:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.482 06:30:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.482 06:30:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.482 06:30:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.482 06:30:15 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:05:05.421 Creating new GPT entries in memory. 00:05:05.421 The operation has completed successfully. 00:05:05.421 06:30:16 -- setup/common.sh@57 -- # (( part++ )) 00:05:05.421 06:30:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.421 06:30:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.421 06:30:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.421 06:30:16 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:05:06.803 The operation has completed successfully. 00:05:06.803 06:30:17 -- setup/common.sh@57 -- # (( part++ )) 00:05:06.803 06:30:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.803 06:30:17 -- setup/common.sh@62 -- # wait 66435 00:05:06.803 06:30:17 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.803 06:30:17 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.803 06:30:17 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.803 06:30:17 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.803 06:30:17 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.803 06:30:17 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.803 06:30:17 -- setup/devices.sh@161 -- # break 00:05:06.803 06:30:17 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.803 06:30:17 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.803 06:30:17 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:06.803 06:30:17 -- setup/devices.sh@166 -- # dm=dm-0 00:05:06.803 06:30:17 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:05:06.803 06:30:17 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:05:06.803 06:30:17 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.803 06:30:17 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:06.803 06:30:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.803 06:30:17 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.803 06:30:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.803 06:30:17 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:06.803 06:30:17 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.803 06:30:17 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:06.803 06:30:17 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:05:06.803 06:30:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 
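For the dm_mount test the harness carves two 262144-sector partitions and stitches them into `/dev/mapper/nvme_dm_test`. The log does not show the table handed to `dmsetup create`, so the two-segment linear table below is an assumption sized from the partition bounds in the trace:

```bash
#!/usr/bin/env bash
# Assumed two-segment linear table: each partition is 262144 sectors
# (264191-2048+1), concatenated into one device-mapper target.
p1=/dev/nvme1n1p1
p2=/dev/nvme1n1p2
seg=262144

dmsetup create nvme_dm_test <<EOF
0 $seg linear $p1 0
$seg $seg linear $p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-N
```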
00:05:06.803 06:30:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:06.803 06:30:17 -- setup/devices.sh@53 -- # local found=0 00:05:06.803 06:30:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.803 06:30:17 -- setup/devices.sh@56 -- # : 00:05:06.803 06:30:17 -- setup/devices.sh@59 -- # local pci status 00:05:06.803 06:30:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:06.803 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.803 06:30:17 -- setup/devices.sh@47 -- # setup output config 00:05:06.803 06:30:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.803 06:30:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.803 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:06.803 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.803 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:06.803 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.063 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.063 06:30:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:07.063 06:30:17 -- setup/devices.sh@63 -- # found=1 00:05:07.063 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.063 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.063 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.323 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.323 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.323 06:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.323 06:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.323 06:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.323 06:30:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:07.323 06:30:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.323 06:30:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:07.323 06:30:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:07.323 06:30:17 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:07.323 06:30:18 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:07.323 06:30:18 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:07.323 06:30:18 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:07.324 06:30:18 -- setup/devices.sh@50 -- # local mount_point= 00:05:07.324 06:30:18 -- setup/devices.sh@51 -- # local test_file= 00:05:07.324 06:30:18 -- setup/devices.sh@53 -- # local found=0 00:05:07.324 06:30:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:07.324 06:30:18 -- setup/devices.sh@59 -- # local pci status 00:05:07.324 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.324 06:30:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:07.324 
06:30:18 -- setup/devices.sh@47 -- # setup output config 00:05:07.324 06:30:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.324 06:30:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.585 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.585 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.585 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.585 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.846 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.846 06:30:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:07.846 06:30:18 -- setup/devices.sh@63 -- # found=1 00:05:07.846 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.846 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.846 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.846 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:07.846 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.147 06:30:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:08.147 06:30:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.147 06:30:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.147 06:30:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.147 06:30:18 -- setup/devices.sh@68 -- # return 0 00:05:08.147 06:30:18 -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.147 06:30:18 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.147 06:30:18 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.147 06:30:18 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.147 06:30:18 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:08.147 06:30:18 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:08.147 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.147 06:30:18 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:08.147 06:30:18 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:08.147 00:05:08.147 real 0m4.633s 00:05:08.147 user 0m0.625s 00:05:08.147 sys 0m0.867s 00:05:08.147 06:30:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.147 ************************************ 00:05:08.147 END TEST dm_mount 00:05:08.147 ************************************ 00:05:08.147 06:30:18 -- common/autotest_common.sh@10 -- # set +x 00:05:08.147 06:30:18 -- setup/devices.sh@1 -- # cleanup 00:05:08.147 06:30:18 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.147 06:30:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.147 06:30:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:08.147 06:30:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:08.147 06:30:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:08.147 06:30:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:08.427 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:08.427 /dev/nvme1n1: 8 bytes were erased at offset 
0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:08.427 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.427 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:08.427 06:30:19 -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.427 06:30:19 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:08.427 06:30:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.427 06:30:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:08.427 06:30:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:08.427 06:30:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:08.427 06:30:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:08.427 00:05:08.427 real 0m11.683s 00:05:08.427 user 0m2.411s 00:05:08.427 sys 0m3.021s 00:05:08.427 06:30:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.427 ************************************ 00:05:08.427 END TEST devices 00:05:08.427 ************************************ 00:05:08.427 06:30:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.427 00:05:08.427 real 0m40.734s 00:05:08.427 user 0m7.791s 00:05:08.427 sys 0m11.337s 00:05:08.427 06:30:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.427 ************************************ 00:05:08.427 06:30:19 -- common/autotest_common.sh@10 -- # set +x 00:05:08.427 END TEST setup.sh 00:05:08.427 ************************************ 00:05:08.427 06:30:19 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:08.688 Hugepages 00:05:08.688 node hugesize free / total 00:05:08.688 node0 1048576kB 0 / 0 00:05:08.688 node0 2048kB 2048 / 2048 00:05:08.688 00:05:08.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.688 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:08.688 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:08.950 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:08.950 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:08.950 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:08.950 06:30:19 -- spdk/autotest.sh@128 -- # uname -s 00:05:08.950 06:30:19 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:08.950 06:30:19 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:08.950 06:30:19 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.891 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.891 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.891 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.891 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.891 06:30:20 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:11.277 06:30:21 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:11.277 06:30:21 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:11.277 06:30:21 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:11.277 06:30:21 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:11.277 06:30:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:11.277 06:30:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:11.277 06:30:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.277 06:30:21 -- 
common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:11.277 06:30:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:11.277 06:30:21 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:11.277 06:30:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:11.277 06:30:21 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.539 Waiting for block devices as requested 00:05:11.539 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.539 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.801 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.801 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:17.096 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:17.096 06:30:27 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:17.096 06:30:27 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1552 -- # continue 00:05:17.096 06:30:27 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:17.096 06:30:27 -- 
common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:17.096 06:30:27 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1552 -- # continue 00:05:17.096 06:30:27 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:17.096 06:30:27 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:17.096 06:30:27 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1552 -- # continue 00:05:17.096 06:30:27 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # 
readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:05:17.096 06:30:27 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:17.096 06:30:27 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:17.096 06:30:27 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:17.096 06:30:27 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:17.097 06:30:27 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:17.097 06:30:27 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:17.097 06:30:27 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:17.097 06:30:27 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:17.097 06:30:27 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:17.097 06:30:27 -- common/autotest_common.sh@1552 -- # continue 00:05:17.097 06:30:27 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:17.097 06:30:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.097 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:17.097 06:30:27 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:17.097 06:30:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.097 06:30:27 -- common/autotest_common.sh@10 -- # set +x 00:05:17.097 06:30:27 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.040 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.040 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.040 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.303 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.303 06:30:28 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:18.303 06:30:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.303 06:30:28 -- common/autotest_common.sh@10 -- # set +x 00:05:18.303 06:30:28 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:18.303 06:30:28 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:18.303 06:30:28 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.303 06:30:28 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:18.303 06:30:28 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:18.303 06:30:28 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:18.303 06:30:28 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:18.303 06:30:28 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:18.303 06:30:28 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
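The per-controller loop above reads OACS from `nvme id-ctrl`, continues only when the namespace-management bit (0x8, hence `oacs_ns_manage=8` for `0x12a`) is set, and then checks that `unvmcap` is zero. A compact sketch of that capability probe; the controller node is illustrative, and nvme-cli plus root are required, as in the harness:

```bash
#!/usr/bin/env bash
# Read OACS from id-ctrl, mask bit 3 (0x8 = namespace management), and
# only read unallocated capacity (unvmcap) when the bit is set.
ctrlr=/dev/nvme2   # illustrative controller node

oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/oacs/ {print $2}')
if (( oacs & 0x8 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/unvmcap/ {print $2}')
    echo "ns-manage supported, unvmcap=$unvmcap"
else
    echo "no namespace management on $ctrlr"
fi
```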
00:05:18.303 06:30:28 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.303 06:30:28 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:18.303 06:30:29 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:05:18.303 06:30:29 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:18.303 06:30:29 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:18.303 06:30:29 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.303 06:30:29 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:18.303 06:30:29 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.303 06:30:29 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:18.303 06:30:29 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.303 06:30:29 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:18.303 06:30:29 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:18.303 06:30:29 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:18.303 06:30:29 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:18.303 06:30:29 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:18.303 06:30:29 -- common/autotest_common.sh@1588 -- # return 0 00:05:18.303 06:30:29 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:18.303 06:30:29 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:18.303 06:30:29 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:18.303 06:30:29 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:18.303 06:30:29 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:18.303 06:30:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.303 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.303 06:30:29 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.303 06:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.303 06:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.303 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.565 ************************************ 00:05:18.565 START TEST env 00:05:18.565 ************************************ 00:05:18.565 06:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:18.565 * Looking for test storage... 
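`opal_revert_cleanup` above filters the NVMe BDFs by PCI device ID: only controllers reporting 0x0a54 are kept, and the QEMU controllers here report 0x0010, so the list stays empty and the revert is a no-op. A sketch of that filter, with the BDF list taken from the `printf` in the trace:

```bash
#!/usr/bin/env bash
# Keep only BDFs whose PCI device ID matches the opal test target.
wanted=0x0a54
bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)

for bdf in "${bdfs[@]}"; do
    dev_id=$(<"/sys/bus/pci/devices/$bdf/device")
    [[ $dev_id == "$wanted" ]] && echo "$bdf"
done
```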
00:05:18.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:18.565 06:30:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.565 06:30:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.565 06:30:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.565 06:30:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.565 06:30:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.565 06:30:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.565 06:30:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.565 06:30:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.565 06:30:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.565 06:30:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.565 06:30:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.565 06:30:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.565 06:30:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.565 06:30:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.565 06:30:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.565 06:30:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.565 06:30:29 -- scripts/common.sh@344 -- # : 1 00:05:18.565 06:30:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.565 06:30:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.565 06:30:29 -- scripts/common.sh@364 -- # decimal 1 00:05:18.565 06:30:29 -- scripts/common.sh@352 -- # local d=1 00:05:18.565 06:30:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.565 06:30:29 -- scripts/common.sh@354 -- # echo 1 00:05:18.565 06:30:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.565 06:30:29 -- scripts/common.sh@365 -- # decimal 2 00:05:18.565 06:30:29 -- scripts/common.sh@352 -- # local d=2 00:05:18.565 06:30:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.565 06:30:29 -- scripts/common.sh@354 -- # echo 2 00:05:18.565 06:30:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.565 06:30:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.565 06:30:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.565 06:30:29 -- scripts/common.sh@367 -- # return 0 00:05:18.565 06:30:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.565 06:30:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.565 --rc genhtml_branch_coverage=1 00:05:18.565 --rc genhtml_function_coverage=1 00:05:18.565 --rc genhtml_legend=1 00:05:18.565 --rc geninfo_all_blocks=1 00:05:18.565 --rc geninfo_unexecuted_blocks=1 00:05:18.565 00:05:18.565 ' 00:05:18.565 06:30:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.565 --rc genhtml_branch_coverage=1 00:05:18.565 --rc genhtml_function_coverage=1 00:05:18.565 --rc genhtml_legend=1 00:05:18.565 --rc geninfo_all_blocks=1 00:05:18.565 --rc geninfo_unexecuted_blocks=1 00:05:18.565 00:05:18.565 ' 00:05:18.565 06:30:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.565 --rc genhtml_branch_coverage=1 00:05:18.565 --rc genhtml_function_coverage=1 00:05:18.565 --rc genhtml_legend=1 00:05:18.565 --rc geninfo_all_blocks=1 00:05:18.565 --rc geninfo_unexecuted_blocks=1 00:05:18.565 00:05:18.565 ' 00:05:18.565 06:30:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.565 --rc genhtml_branch_coverage=1 00:05:18.565 --rc genhtml_function_coverage=1 00:05:18.565 --rc genhtml_legend=1 00:05:18.565 --rc geninfo_all_blocks=1 00:05:18.565 --rc geninfo_unexecuted_blocks=1 00:05:18.565 00:05:18.565 ' 00:05:18.565 06:30:29 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:18.565 06:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.565 06:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.565 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.565 ************************************ 00:05:18.565 START TEST env_memory 00:05:18.565 ************************************ 00:05:18.565 06:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:18.565 00:05:18.565 00:05:18.565 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.565 http://cunit.sourceforge.net/ 00:05:18.565 00:05:18.565 00:05:18.565 Suite: memory 00:05:18.565 Test: alloc and free memory map ...[2024-11-28 06:30:29.310018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.827 passed 00:05:18.827 Test: mem map translation ...[2024-11-28 06:30:29.349286] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.827 [2024-11-28 06:30:29.349432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.827 [2024-11-28 06:30:29.349550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.827 [2024-11-28 06:30:29.349592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.827 passed 00:05:18.827 Test: mem map registration ...[2024-11-28 06:30:29.419617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.827 [2024-11-28 06:30:29.419776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.827 passed 00:05:18.827 Test: mem map adjacent registrations ...passed 00:05:18.827 00:05:18.827 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.827 suites 1 1 n/a 0 0 00:05:18.827 tests 4 4 4 0 0 00:05:18.827 asserts 152 152 152 0 n/a 00:05:18.827 00:05:18.827 Elapsed time = 0.236 seconds 00:05:18.827 00:05:18.827 real 0m0.274s 00:05:18.827 user 0m0.241s 00:05:18.827 sys 0m0.022s 00:05:18.827 06:30:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.827 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.827 ************************************ 00:05:18.827 END TEST env_memory 00:05:18.827 ************************************ 00:05:18.827 06:30:29 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.827 06:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.827 06:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.827 06:30:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.827 ************************************ 00:05:18.827 START TEST env_vtophys 00:05:18.827 ************************************ 00:05:18.827 06:30:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:19.089 EAL: lib.eal log level changed from notice to debug 00:05:19.089 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 1 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 2 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 3 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 4 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 5 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 6 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 7 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 8 as core 0 on socket 0 00:05:19.089 EAL: Detected lcore 9 as core 0 on socket 0 00:05:19.089 EAL: Maximum logical cores by configuration: 128 00:05:19.089 EAL: Detected CPU lcores: 10 00:05:19.089 EAL: Detected NUMA nodes: 1 00:05:19.089 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:19.089 EAL: Detected shared linkage of DPDK 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:19.089 EAL: Registered [vdev] bus. 00:05:19.089 EAL: bus.vdev log level changed from disabled to notice 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:19.089 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.089 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:19.089 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:19.089 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.089 EAL: No shared files mode enabled, IPC is disabled 00:05:19.089 EAL: Selected IOVA mode 'PA' 00:05:19.089 EAL: Probing VFIO support... 00:05:19.089 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.089 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:19.089 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.089 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.089 EAL: Setting up physically contiguous memory... 
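EAL probes /sys/module/vfio above, finds nothing, and falls back to non-VFIO device access with IOVA mode 'PA'. A minimal pre-flight check in the same spirit (a sketch; it only inspects the module paths EAL itself probes, and does not load anything, which this run also does not do):

    # Check for the VFIO modules EAL looks for above; when they are absent,
    # EAL logs "VFIO modules not loaded, skipping VFIO support..." and
    # selects IOVA mode 'PA' as seen in this run.
    for mod in vfio vfio_pci; do
        if [[ -d /sys/module/$mod ]]; then
            echo "$mod: loaded"
        else
            echo "$mod: not loaded (EAL will skip VFIO support)"
        fi
    done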
00:05:19.089 EAL: Setting maximum number of open files to 524288 00:05:19.089 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.089 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.089 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.089 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.089 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.089 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.089 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.089 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.089 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.089 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.089 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.089 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.089 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.089 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.089 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.089 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.089 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.089 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.089 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.089 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.089 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.089 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.089 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.089 EAL: Hugepages will be freed exactly as allocated. 00:05:19.089 EAL: No shared files mode enabled, IPC is disabled 00:05:19.089 EAL: No shared files mode enabled, IPC is disabled 00:05:19.089 EAL: TSC frequency is ~2600000 KHz 00:05:19.089 EAL: Main lcore 0 is ready (tid=7fc106e20a40;cpuset=[0]) 00:05:19.089 EAL: Trying to obtain current memory policy. 00:05:19.089 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.089 EAL: Restoring previous memory policy: 0 00:05:19.089 EAL: request: mp_malloc_sync 00:05:19.089 EAL: No shared files mode enabled, IPC is disabled 00:05:19.089 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.089 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:19.089 EAL: No shared files mode enabled, IPC is disabled 00:05:19.089 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.089 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.089 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:19.089 00:05:19.089 00:05:19.089 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.089 http://cunit.sourceforge.net/ 00:05:19.089 00:05:19.089 00:05:19.089 Suite: components_suite 00:05:19.661 Test: vtophys_malloc_test ...passed 00:05:19.661 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
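Each of the four memseg lists above pairs a 0x61000-byte header reservation with a 0x400000000-byte data reservation; the data size is simply n_segs times hugepage_sz from the "Creating 4 segment lists" line. A quick sanity check of that arithmetic (constants copied from the log):

    # 8192 segments x 2 MiB hugepages per memseg list, as reported above.
    n_segs=8192
    hugepage_sz=2097152   # 2 MiB
    printf 'per-list VA reservation = 0x%x bytes\n' $((n_segs * hugepage_sz))
    # -> 0x400000000, matching every "size = 0x400000000" line above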
00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.661 EAL: Trying to obtain current memory policy. 00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.661 EAL: Trying to obtain current memory policy. 00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.661 EAL: Trying to obtain current memory policy. 00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.661 EAL: Trying to obtain current memory policy. 00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.661 EAL: request: mp_malloc_sync 00:05:19.661 EAL: No shared files mode enabled, IPC is disabled 00:05:19.661 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.661 EAL: Trying to obtain current memory policy. 
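vtophys_spdk_malloc_test walks the heap through paired expand/shrink events: every "expanded by N MB" should later be matched by a "shrunk by N MB" once the allocation is freed. A hedged post-hoc check over a saved console log (vtophys.log is a hypothetical capture of this output, not a file the harness writes):

    # Net out the EAL heap events; a clean run should sum to (near) zero.
    awk '/Heap on socket 0 was (expanded|shrunk) by/ {
             n = $NF; sub(/MB$/, "", n)
             if ($0 ~ /expanded/) total += n; else total -= n
         }
         END { printf "net heap growth: %d MB\n", total }' vtophys.log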
00:05:19.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.661 EAL: Restoring previous memory policy: 4 00:05:19.661 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.662 EAL: request: mp_malloc_sync 00:05:19.662 EAL: No shared files mode enabled, IPC is disabled 00:05:19.662 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.662 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.662 EAL: request: mp_malloc_sync 00:05:19.662 EAL: No shared files mode enabled, IPC is disabled 00:05:19.662 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.662 EAL: Trying to obtain current memory policy. 00:05:19.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.662 EAL: Restoring previous memory policy: 4 00:05:19.662 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.662 EAL: request: mp_malloc_sync 00:05:19.662 EAL: No shared files mode enabled, IPC is disabled 00:05:19.662 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.662 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.662 EAL: request: mp_malloc_sync 00:05:19.662 EAL: No shared files mode enabled, IPC is disabled 00:05:19.662 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.662 EAL: Trying to obtain current memory policy. 00:05:19.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.921 EAL: Restoring previous memory policy: 4 00:05:19.921 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.921 EAL: request: mp_malloc_sync 00:05:19.921 EAL: No shared files mode enabled, IPC is disabled 00:05:19.921 EAL: Heap on socket 0 was expanded by 258MB 00:05:19.921 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.921 EAL: request: mp_malloc_sync 00:05:19.921 EAL: No shared files mode enabled, IPC is disabled 00:05:19.921 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.921 EAL: Trying to obtain current memory policy. 00:05:19.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.221 EAL: Restoring previous memory policy: 4 00:05:20.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.221 EAL: request: mp_malloc_sync 00:05:20.221 EAL: No shared files mode enabled, IPC is disabled 00:05:20.221 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.508 EAL: request: mp_malloc_sync 00:05:20.508 EAL: No shared files mode enabled, IPC is disabled 00:05:20.508 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.508 EAL: Trying to obtain current memory policy. 
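The expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258, 514, and finally 1026 MB) follow 2^k + 2 MB; a plausible reading is a 2^k MB test allocation plus a constant 2 MB of allocator overhead, though that interpretation is inferred from the log rather than from the test source. The progression itself is easy to reproduce:

    # Reproduce the heap expansion sizes stepped through above.
    for k in $(seq 1 10); do
        printf '%d MB\n' $(( (1 << k) + 2 ))
    done
    # -> 4 6 10 18 34 66 130 258 514 1026, the sizes logged by EAL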
00:05:20.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.768 EAL: Restoring previous memory policy: 4 00:05:20.768 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.768 EAL: request: mp_malloc_sync 00:05:20.768 EAL: No shared files mode enabled, IPC is disabled 00:05:20.768 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.291 passed 00:05:21.291 00:05:21.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.291 suites 1 1 n/a 0 0 00:05:21.291 tests 2 2 2 0 0 00:05:21.291 asserts 5470 5470 5470 0 n/a 00:05:21.291 00:05:21.291 Elapsed time = 2.034 seconds 00:05:21.291 EAL: request: mp_malloc_sync 00:05:21.291 EAL: No shared files mode enabled, IPC is disabled 00:05:21.291 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.291 EAL: request: mp_malloc_sync 00:05:21.291 EAL: No shared files mode enabled, IPC is disabled 00:05:21.291 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.291 EAL: No shared files mode enabled, IPC is disabled 00:05:21.291 EAL: No shared files mode enabled, IPC is disabled 00:05:21.291 EAL: No shared files mode enabled, IPC is disabled 00:05:21.291 ************************************ 00:05:21.291 END TEST env_vtophys 00:05:21.291 ************************************ 00:05:21.291 00:05:21.291 real 0m2.280s 00:05:21.291 user 0m1.019s 00:05:21.291 sys 0m1.100s 00:05:21.291 06:30:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.291 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:21.291 06:30:31 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:21.291 06:30:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.291 06:30:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.291 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:21.291 ************************************ 00:05:21.291 START TEST env_pci 00:05:21.291 ************************************ 00:05:21.291 06:30:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:21.291 00:05:21.291 00:05:21.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.291 http://cunit.sourceforge.net/ 00:05:21.291 00:05:21.291 00:05:21.291 Suite: pci 00:05:21.291 Test: pci_hook ...[2024-11-28 06:30:31.962749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68115 has claimed it 00:05:21.291 passed 00:05:21.291 00:05:21.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.291 suites 1 1 n/a 0 0 00:05:21.291 tests 1 1 1 0 0 00:05:21.291 asserts 25 25 25 0 n/a 00:05:21.291 00:05:21.291 Elapsed time = 0.007 seconds 00:05:21.291 EAL: Cannot find device (10000:00:01.0) 00:05:21.291 EAL: Failed to attach device on primary process 00:05:21.291 00:05:21.291 real 0m0.056s 00:05:21.291 user 0m0.020s 00:05:21.291 sys 0m0.035s 00:05:21.291 06:30:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.291 ************************************ 00:05:21.291 END TEST env_pci 00:05:21.291 ************************************ 00:05:21.291 06:30:31 -- common/autotest_common.sh@10 -- # set +x 00:05:21.291 06:30:32 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.291 06:30:32 -- env/env.sh@15 -- # uname 00:05:21.291 06:30:32 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.291 06:30:32 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:21.291 06:30:32 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.291 06:30:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:21.291 06:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.291 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:21.553 ************************************ 00:05:21.553 START TEST env_dpdk_post_init 00:05:21.553 ************************************ 00:05:21.553 06:30:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.553 EAL: Detected CPU lcores: 10 00:05:21.553 EAL: Detected NUMA nodes: 1 00:05:21.553 EAL: Detected shared linkage of DPDK 00:05:21.553 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.553 EAL: Selected IOVA mode 'PA' 00:05:21.553 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.553 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:21.553 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:21.553 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:21.553 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:21.553 Starting DPDK initialization... 00:05:21.553 Starting SPDK post initialization... 00:05:21.553 SPDK NVMe probe 00:05:21.553 Attaching to 0000:00:06.0 00:05:21.553 Attaching to 0000:00:07.0 00:05:21.553 Attaching to 0000:00:08.0 00:05:21.553 Attaching to 0000:00:09.0 00:05:21.553 Attached to 0000:00:07.0 00:05:21.553 Attached to 0000:00:09.0 00:05:21.553 Attached to 0000:00:06.0 00:05:21.553 Attached to 0000:00:08.0 00:05:21.553 Cleaning up... 00:05:21.553 00:05:21.553 real 0m0.225s 00:05:21.553 user 0m0.052s 00:05:21.553 sys 0m0.073s 00:05:21.553 ************************************ 00:05:21.553 END TEST env_dpdk_post_init 00:05:21.553 06:30:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.553 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:21.553 ************************************ 00:05:21.813 06:30:32 -- env/env.sh@26 -- # uname 00:05:21.813 06:30:32 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:21.813 06:30:32 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.813 06:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.813 06:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.813 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:21.813 ************************************ 00:05:21.813 START TEST env_mem_callbacks 00:05:21.813 ************************************ 00:05:21.813 06:30:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:21.813 EAL: Detected CPU lcores: 10 00:05:21.813 EAL: Detected NUMA nodes: 1 00:05:21.813 EAL: Detected shared linkage of DPDK 00:05:21.813 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.813 EAL: Selected IOVA mode 'PA' 00:05:21.813 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.813 00:05:21.813 00:05:21.813 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.813 http://cunit.sourceforge.net/ 00:05:21.813 00:05:21.813 00:05:21.813 Suite: memory 00:05:21.813 Test: test ... 
00:05:21.813 register 0x200000200000 2097152 00:05:21.813 malloc 3145728 00:05:21.813 register 0x200000400000 4194304 00:05:21.813 buf 0x200000500000 len 3145728 PASSED 00:05:21.813 malloc 64 00:05:21.813 buf 0x2000004fff40 len 64 PASSED 00:05:21.813 malloc 4194304 00:05:21.813 register 0x200000800000 6291456 00:05:21.813 buf 0x200000a00000 len 4194304 PASSED 00:05:21.813 free 0x200000500000 3145728 00:05:21.813 free 0x2000004fff40 64 00:05:21.813 unregister 0x200000400000 4194304 PASSED 00:05:21.813 free 0x200000a00000 4194304 00:05:21.813 unregister 0x200000800000 6291456 PASSED 00:05:21.813 malloc 8388608 00:05:21.813 register 0x200000400000 10485760 00:05:21.813 buf 0x200000600000 len 8388608 PASSED 00:05:21.813 free 0x200000600000 8388608 00:05:21.813 unregister 0x200000400000 10485760 PASSED 00:05:21.813 passed 00:05:21.813 00:05:21.813 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.813 suites 1 1 n/a 0 0 00:05:21.813 tests 1 1 1 0 0 00:05:21.813 asserts 15 15 15 0 n/a 00:05:21.813 00:05:21.813 Elapsed time = 0.014 seconds 00:05:21.813 00:05:21.813 real 0m0.173s 00:05:21.813 user 0m0.023s 00:05:21.813 sys 0m0.043s 00:05:21.813 06:30:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.813 ************************************ 00:05:21.813 END TEST env_mem_callbacks 00:05:21.813 ************************************ 00:05:21.813 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.075 00:05:22.075 real 0m3.511s 00:05:22.075 user 0m1.526s 00:05:22.075 sys 0m1.514s 00:05:22.075 06:30:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.075 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.075 ************************************ 00:05:22.075 END TEST env 00:05:22.075 ************************************ 00:05:22.075 06:30:32 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.075 06:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.075 06:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.075 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.075 ************************************ 00:05:22.075 START TEST rpc 00:05:22.075 ************************************ 00:05:22.075 06:30:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.075 * Looking for test storage... 
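The mem_callbacks trace just above interleaves register/malloc/buf/free/unregister events, and each "register <addr> <len>" is eventually balanced by an "unregister <addr> <len>". A small pairing check over a captured log (mem_callbacks.log is a hypothetical capture; regions registered by the harness itself may legitimately stay registered for the life of the process, so treat leftovers as informational):

    # Flag any register/unregister (addr, len) pairs that do not balance.
    awk '{
             for (i = 1; i <= NF - 2; i++) {
                 if ($i == "register")   seen[$(i+1) " " $(i+2)]++
                 if ($i == "unregister") seen[$(i+1) " " $(i+2)]--
             }
         }
         END {
             for (k in seen)
                 if (seen[k] != 0) print "unbalanced:", k, "(" seen[k] ")"
         }' mem_callbacks.log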
00:05:22.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.075 06:30:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.075 06:30:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.075 06:30:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.076 06:30:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.076 06:30:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.076 06:30:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.076 06:30:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.076 06:30:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.076 06:30:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.076 06:30:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.076 06:30:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.076 06:30:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.076 06:30:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.076 06:30:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.076 06:30:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.076 06:30:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.076 06:30:32 -- scripts/common.sh@344 -- # : 1 00:05:22.076 06:30:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.076 06:30:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.076 06:30:32 -- scripts/common.sh@364 -- # decimal 1 00:05:22.076 06:30:32 -- scripts/common.sh@352 -- # local d=1 00:05:22.076 06:30:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.076 06:30:32 -- scripts/common.sh@354 -- # echo 1 00:05:22.076 06:30:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.076 06:30:32 -- scripts/common.sh@365 -- # decimal 2 00:05:22.076 06:30:32 -- scripts/common.sh@352 -- # local d=2 00:05:22.076 06:30:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.076 06:30:32 -- scripts/common.sh@354 -- # echo 2 00:05:22.076 06:30:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.076 06:30:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.076 06:30:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.076 06:30:32 -- scripts/common.sh@367 -- # return 0 00:05:22.076 06:30:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.076 06:30:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.076 --rc genhtml_branch_coverage=1 00:05:22.076 --rc genhtml_function_coverage=1 00:05:22.076 --rc genhtml_legend=1 00:05:22.076 --rc geninfo_all_blocks=1 00:05:22.076 --rc geninfo_unexecuted_blocks=1 00:05:22.076 00:05:22.076 ' 00:05:22.076 06:30:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.076 --rc genhtml_branch_coverage=1 00:05:22.076 --rc genhtml_function_coverage=1 00:05:22.076 --rc genhtml_legend=1 00:05:22.076 --rc geninfo_all_blocks=1 00:05:22.076 --rc geninfo_unexecuted_blocks=1 00:05:22.076 00:05:22.076 ' 00:05:22.076 06:30:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.076 --rc genhtml_branch_coverage=1 00:05:22.076 --rc genhtml_function_coverage=1 00:05:22.076 --rc genhtml_legend=1 00:05:22.076 --rc geninfo_all_blocks=1 00:05:22.076 --rc geninfo_unexecuted_blocks=1 00:05:22.076 00:05:22.076 ' 00:05:22.076 06:30:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.076 --rc genhtml_branch_coverage=1 00:05:22.076 --rc genhtml_function_coverage=1 00:05:22.076 --rc genhtml_legend=1 00:05:22.076 --rc geninfo_all_blocks=1 00:05:22.076 --rc geninfo_unexecuted_blocks=1 00:05:22.076 00:05:22.076 ' 00:05:22.076 06:30:32 -- rpc/rpc.sh@65 -- # spdk_pid=68241 00:05:22.076 06:30:32 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.076 06:30:32 -- rpc/rpc.sh@67 -- # waitforlisten 68241 00:05:22.076 06:30:32 -- common/autotest_common.sh@829 -- # '[' -z 68241 ']' 00:05:22.076 06:30:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.076 06:30:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.076 06:30:32 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:22.076 06:30:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.076 06:30:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.076 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:22.338 [2024-11-28 06:30:32.891803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:22.338 [2024-11-28 06:30:32.891958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68241 ] 00:05:22.338 [2024-11-28 06:30:33.027200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.338 [2024-11-28 06:30:33.099750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.338 [2024-11-28 06:30:33.100012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:22.338 [2024-11-28 06:30:33.100044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68241' to capture a snapshot of events at runtime. 00:05:22.338 [2024-11-28 06:30:33.100056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68241 for offline analysis/debug. 
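spdk_tgt is started above with '-e bdev', and app_setup_trace prints the exact spdk_trace invocation for a live snapshot. The same start/inspect/stop skeleton, with a simplified stand-in for the harness's waitforlisten helper (the polling loop is an assumption; the spdk_tgt and spdk_trace command lines are the ones logged):

    # Start the SPDK target with the bdev tracepoint group, wait for its
    # RPC socket, snapshot the trace, then shut it down.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        sleep 0.2   # stand-in for waitforlisten on /var/tmp/spdk.sock
    done
    spdk_trace -s spdk_tgt -p "$spdk_pid"   # as suggested by app_setup_trace above
    kill "$spdk_pid"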
00:05:22.338 [2024-11-28 06:30:33.100114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.282 06:30:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.282 06:30:33 -- common/autotest_common.sh@862 -- # return 0 00:05:23.282 06:30:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.282 06:30:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.282 06:30:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.282 06:30:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.282 06:30:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.282 06:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 ************************************ 00:05:23.282 START TEST rpc_integrity 00:05:23.282 ************************************ 00:05:23.282 06:30:33 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:23.282 06:30:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.282 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.282 06:30:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.282 06:30:33 -- rpc/rpc.sh@13 -- # jq length 00:05:23.282 06:30:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.282 06:30:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.282 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.282 06:30:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:23.282 06:30:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.282 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.282 06:30:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.282 { 00:05:23.282 "name": "Malloc0", 00:05:23.282 "aliases": [ 00:05:23.282 "06ca04b0-f55e-49af-ae42-0e2965aab406" 00:05:23.282 ], 00:05:23.282 "product_name": "Malloc disk", 00:05:23.282 "block_size": 512, 00:05:23.282 "num_blocks": 16384, 00:05:23.282 "uuid": "06ca04b0-f55e-49af-ae42-0e2965aab406", 00:05:23.282 "assigned_rate_limits": { 00:05:23.282 "rw_ios_per_sec": 0, 00:05:23.282 "rw_mbytes_per_sec": 0, 00:05:23.282 "r_mbytes_per_sec": 0, 00:05:23.282 "w_mbytes_per_sec": 0 00:05:23.282 }, 00:05:23.282 "claimed": false, 00:05:23.282 "zoned": false, 00:05:23.282 "supported_io_types": { 00:05:23.282 "read": true, 00:05:23.282 "write": true, 00:05:23.282 "unmap": true, 00:05:23.282 "write_zeroes": true, 00:05:23.282 "flush": true, 00:05:23.282 "reset": true, 00:05:23.282 "compare": false, 00:05:23.282 "compare_and_write": false, 00:05:23.282 "abort": true, 00:05:23.282 "nvme_admin": false, 00:05:23.282 "nvme_io": false 00:05:23.282 }, 00:05:23.282 "memory_domains": [ 00:05:23.282 { 00:05:23.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.282 
"dma_device_type": 2 00:05:23.282 } 00:05:23.282 ], 00:05:23.282 "driver_specific": {} 00:05:23.282 } 00:05:23.282 ]' 00:05:23.282 06:30:33 -- rpc/rpc.sh@17 -- # jq length 00:05:23.282 06:30:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:23.282 06:30:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:23.282 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 [2024-11-28 06:30:33.858626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:23.282 [2024-11-28 06:30:33.858757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.282 [2024-11-28 06:30:33.858800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:23.282 [2024-11-28 06:30:33.858824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.282 [2024-11-28 06:30:33.861761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.282 [2024-11-28 06:30:33.861818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.282 Passthru0 00:05:23.282 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.282 06:30:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.282 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.282 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.282 06:30:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.282 { 00:05:23.282 "name": "Malloc0", 00:05:23.282 "aliases": [ 00:05:23.282 "06ca04b0-f55e-49af-ae42-0e2965aab406" 00:05:23.282 ], 00:05:23.282 "product_name": "Malloc disk", 00:05:23.282 "block_size": 512, 00:05:23.282 "num_blocks": 16384, 00:05:23.282 "uuid": "06ca04b0-f55e-49af-ae42-0e2965aab406", 00:05:23.282 "assigned_rate_limits": { 00:05:23.282 "rw_ios_per_sec": 0, 00:05:23.282 "rw_mbytes_per_sec": 0, 00:05:23.282 "r_mbytes_per_sec": 0, 00:05:23.282 "w_mbytes_per_sec": 0 00:05:23.282 }, 00:05:23.282 "claimed": true, 00:05:23.282 "claim_type": "exclusive_write", 00:05:23.282 "zoned": false, 00:05:23.282 "supported_io_types": { 00:05:23.282 "read": true, 00:05:23.282 "write": true, 00:05:23.282 "unmap": true, 00:05:23.282 "write_zeroes": true, 00:05:23.282 "flush": true, 00:05:23.282 "reset": true, 00:05:23.282 "compare": false, 00:05:23.282 "compare_and_write": false, 00:05:23.282 "abort": true, 00:05:23.282 "nvme_admin": false, 00:05:23.282 "nvme_io": false 00:05:23.282 }, 00:05:23.282 "memory_domains": [ 00:05:23.282 { 00:05:23.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.282 "dma_device_type": 2 00:05:23.282 } 00:05:23.282 ], 00:05:23.282 "driver_specific": {} 00:05:23.282 }, 00:05:23.282 { 00:05:23.282 "name": "Passthru0", 00:05:23.282 "aliases": [ 00:05:23.282 "f175ea45-f0e4-5172-b57f-a166a383560b" 00:05:23.282 ], 00:05:23.282 "product_name": "passthru", 00:05:23.283 "block_size": 512, 00:05:23.283 "num_blocks": 16384, 00:05:23.283 "uuid": "f175ea45-f0e4-5172-b57f-a166a383560b", 00:05:23.283 "assigned_rate_limits": { 00:05:23.283 "rw_ios_per_sec": 0, 00:05:23.283 "rw_mbytes_per_sec": 0, 00:05:23.283 "r_mbytes_per_sec": 0, 00:05:23.283 "w_mbytes_per_sec": 0 00:05:23.283 }, 00:05:23.283 "claimed": false, 00:05:23.283 "zoned": false, 00:05:23.283 "supported_io_types": { 00:05:23.283 "read": true, 00:05:23.283 "write": true, 00:05:23.283 "unmap": true, 00:05:23.283 
"write_zeroes": true, 00:05:23.283 "flush": true, 00:05:23.283 "reset": true, 00:05:23.283 "compare": false, 00:05:23.283 "compare_and_write": false, 00:05:23.283 "abort": true, 00:05:23.283 "nvme_admin": false, 00:05:23.283 "nvme_io": false 00:05:23.283 }, 00:05:23.283 "memory_domains": [ 00:05:23.283 { 00:05:23.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.283 "dma_device_type": 2 00:05:23.283 } 00:05:23.283 ], 00:05:23.283 "driver_specific": { 00:05:23.283 "passthru": { 00:05:23.283 "name": "Passthru0", 00:05:23.283 "base_bdev_name": "Malloc0" 00:05:23.283 } 00:05:23.283 } 00:05:23.283 } 00:05:23.283 ]' 00:05:23.283 06:30:33 -- rpc/rpc.sh@21 -- # jq length 00:05:23.283 06:30:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.283 06:30:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.283 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.283 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.283 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.283 06:30:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:23.283 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.283 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.283 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.283 06:30:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:23.283 06:30:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.283 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.283 06:30:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.283 06:30:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:23.283 06:30:33 -- rpc/rpc.sh@26 -- # jq length 00:05:23.283 06:30:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:23.283 00:05:23.283 real 0m0.238s 00:05:23.283 user 0m0.127s 00:05:23.283 sys 0m0.037s 00:05:23.283 06:30:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.283 ************************************ 00:05:23.283 END TEST rpc_integrity 00:05:23.283 ************************************ 00:05:23.283 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:05:23.283 06:30:34 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:23.283 06:30:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.283 06:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.283 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 START TEST rpc_plugins 00:05:23.544 ************************************ 00:05:23.544 06:30:34 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:23.544 06:30:34 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:23.544 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.544 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.544 06:30:34 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:23.544 06:30:34 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:23.544 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.544 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.544 06:30:34 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:23.544 { 00:05:23.544 "name": "Malloc1", 00:05:23.544 "aliases": [ 00:05:23.544 "494cfb36-fca8-4b53-b55b-ae92f9de2eca" 00:05:23.544 ], 00:05:23.545 "product_name": "Malloc disk", 00:05:23.545 
"block_size": 4096, 00:05:23.545 "num_blocks": 256, 00:05:23.545 "uuid": "494cfb36-fca8-4b53-b55b-ae92f9de2eca", 00:05:23.545 "assigned_rate_limits": { 00:05:23.545 "rw_ios_per_sec": 0, 00:05:23.545 "rw_mbytes_per_sec": 0, 00:05:23.545 "r_mbytes_per_sec": 0, 00:05:23.545 "w_mbytes_per_sec": 0 00:05:23.545 }, 00:05:23.545 "claimed": false, 00:05:23.545 "zoned": false, 00:05:23.545 "supported_io_types": { 00:05:23.545 "read": true, 00:05:23.545 "write": true, 00:05:23.545 "unmap": true, 00:05:23.545 "write_zeroes": true, 00:05:23.545 "flush": true, 00:05:23.545 "reset": true, 00:05:23.545 "compare": false, 00:05:23.545 "compare_and_write": false, 00:05:23.545 "abort": true, 00:05:23.545 "nvme_admin": false, 00:05:23.545 "nvme_io": false 00:05:23.545 }, 00:05:23.545 "memory_domains": [ 00:05:23.545 { 00:05:23.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.545 "dma_device_type": 2 00:05:23.545 } 00:05:23.545 ], 00:05:23.545 "driver_specific": {} 00:05:23.545 } 00:05:23.545 ]' 00:05:23.545 06:30:34 -- rpc/rpc.sh@32 -- # jq length 00:05:23.545 06:30:34 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:23.545 06:30:34 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:23.545 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.545 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.545 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.545 06:30:34 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:23.545 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.545 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.545 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.545 06:30:34 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:23.545 06:30:34 -- rpc/rpc.sh@36 -- # jq length 00:05:23.545 06:30:34 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:23.545 00:05:23.545 real 0m0.118s 00:05:23.545 user 0m0.061s 00:05:23.545 sys 0m0.021s 00:05:23.545 06:30:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.545 ************************************ 00:05:23.545 END TEST rpc_plugins 00:05:23.545 ************************************ 00:05:23.545 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.545 06:30:34 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:23.545 06:30:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.545 06:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.545 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.545 ************************************ 00:05:23.545 START TEST rpc_trace_cmd_test 00:05:23.545 ************************************ 00:05:23.545 06:30:34 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:23.545 06:30:34 -- rpc/rpc.sh@40 -- # local info 00:05:23.545 06:30:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:23.545 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.545 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.545 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.545 06:30:34 -- rpc/rpc.sh@42 -- # info='{ 00:05:23.545 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68241", 00:05:23.545 "tpoint_group_mask": "0x8", 00:05:23.545 "iscsi_conn": { 00:05:23.545 "mask": "0x2", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "scsi": { 00:05:23.545 "mask": "0x4", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "bdev": { 00:05:23.545 "mask": "0x8", 00:05:23.545 "tpoint_mask": 
"0xffffffffffffffff" 00:05:23.545 }, 00:05:23.545 "nvmf_rdma": { 00:05:23.545 "mask": "0x10", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "nvmf_tcp": { 00:05:23.545 "mask": "0x20", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "ftl": { 00:05:23.545 "mask": "0x40", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "blobfs": { 00:05:23.545 "mask": "0x80", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "dsa": { 00:05:23.545 "mask": "0x200", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "thread": { 00:05:23.545 "mask": "0x400", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "nvme_pcie": { 00:05:23.545 "mask": "0x800", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "iaa": { 00:05:23.545 "mask": "0x1000", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "nvme_tcp": { 00:05:23.545 "mask": "0x2000", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 }, 00:05:23.545 "bdev_nvme": { 00:05:23.545 "mask": "0x4000", 00:05:23.545 "tpoint_mask": "0x0" 00:05:23.545 } 00:05:23.545 }' 00:05:23.545 06:30:34 -- rpc/rpc.sh@43 -- # jq length 00:05:23.545 06:30:34 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:23.545 06:30:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:23.806 06:30:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:23.806 06:30:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:23.806 06:30:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:23.806 06:30:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:23.806 06:30:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:23.806 06:30:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:23.806 06:30:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:23.806 00:05:23.806 real 0m0.185s 00:05:23.806 user 0m0.141s 00:05:23.806 sys 0m0.032s 00:05:23.806 ************************************ 00:05:23.806 06:30:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.806 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 END TEST rpc_trace_cmd_test 00:05:23.806 ************************************ 00:05:23.806 06:30:34 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:23.806 06:30:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:23.806 06:30:34 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:23.806 06:30:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.806 06:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.806 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 ************************************ 00:05:23.806 START TEST rpc_daemon_integrity 00:05:23.806 ************************************ 00:05:23.806 06:30:34 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:23.806 06:30:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.806 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.806 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.806 06:30:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.806 06:30:34 -- rpc/rpc.sh@13 -- # jq length 00:05:23.806 06:30:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.806 06:30:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.806 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.806 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.806 06:30:34 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:23.806 06:30:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.806 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:23.806 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.806 06:30:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.806 { 00:05:23.806 "name": "Malloc2", 00:05:23.806 "aliases": [ 00:05:23.806 "66e4d865-006e-412c-bc0c-d50ae4cce024" 00:05:23.806 ], 00:05:23.806 "product_name": "Malloc disk", 00:05:23.806 "block_size": 512, 00:05:23.806 "num_blocks": 16384, 00:05:23.806 "uuid": "66e4d865-006e-412c-bc0c-d50ae4cce024", 00:05:23.806 "assigned_rate_limits": { 00:05:23.806 "rw_ios_per_sec": 0, 00:05:23.807 "rw_mbytes_per_sec": 0, 00:05:23.807 "r_mbytes_per_sec": 0, 00:05:23.807 "w_mbytes_per_sec": 0 00:05:23.807 }, 00:05:23.807 "claimed": false, 00:05:23.807 "zoned": false, 00:05:23.807 "supported_io_types": { 00:05:23.807 "read": true, 00:05:23.807 "write": true, 00:05:23.807 "unmap": true, 00:05:23.807 "write_zeroes": true, 00:05:23.807 "flush": true, 00:05:23.807 "reset": true, 00:05:23.807 "compare": false, 00:05:23.807 "compare_and_write": false, 00:05:23.807 "abort": true, 00:05:23.807 "nvme_admin": false, 00:05:23.807 "nvme_io": false 00:05:23.807 }, 00:05:23.807 "memory_domains": [ 00:05:23.807 { 00:05:23.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.807 "dma_device_type": 2 00:05:23.807 } 00:05:23.807 ], 00:05:23.807 "driver_specific": {} 00:05:23.807 } 00:05:23.807 ]' 00:05:23.807 06:30:34 -- rpc/rpc.sh@17 -- # jq length 00:05:24.068 06:30:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.068 06:30:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:24.068 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.068 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.068 [2024-11-28 06:30:34.598668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:24.068 [2024-11-28 06:30:34.598793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.068 [2024-11-28 06:30:34.598820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:24.068 [2024-11-28 06:30:34.598834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.068 [2024-11-28 06:30:34.601553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.068 [2024-11-28 06:30:34.601619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.068 Passthru0 00:05:24.068 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.068 06:30:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.068 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.068 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.068 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.068 06:30:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.068 { 00:05:24.068 "name": "Malloc2", 00:05:24.068 "aliases": [ 00:05:24.068 "66e4d865-006e-412c-bc0c-d50ae4cce024" 00:05:24.068 ], 00:05:24.068 "product_name": "Malloc disk", 00:05:24.068 "block_size": 512, 00:05:24.068 "num_blocks": 16384, 00:05:24.068 "uuid": "66e4d865-006e-412c-bc0c-d50ae4cce024", 00:05:24.068 "assigned_rate_limits": { 00:05:24.068 "rw_ios_per_sec": 0, 00:05:24.068 "rw_mbytes_per_sec": 0, 00:05:24.068 "r_mbytes_per_sec": 0, 00:05:24.068 
"w_mbytes_per_sec": 0 00:05:24.068 }, 00:05:24.068 "claimed": true, 00:05:24.068 "claim_type": "exclusive_write", 00:05:24.068 "zoned": false, 00:05:24.068 "supported_io_types": { 00:05:24.068 "read": true, 00:05:24.068 "write": true, 00:05:24.068 "unmap": true, 00:05:24.068 "write_zeroes": true, 00:05:24.068 "flush": true, 00:05:24.068 "reset": true, 00:05:24.068 "compare": false, 00:05:24.068 "compare_and_write": false, 00:05:24.068 "abort": true, 00:05:24.068 "nvme_admin": false, 00:05:24.068 "nvme_io": false 00:05:24.068 }, 00:05:24.068 "memory_domains": [ 00:05:24.068 { 00:05:24.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.068 "dma_device_type": 2 00:05:24.068 } 00:05:24.068 ], 00:05:24.068 "driver_specific": {} 00:05:24.068 }, 00:05:24.069 { 00:05:24.069 "name": "Passthru0", 00:05:24.069 "aliases": [ 00:05:24.069 "57a9c569-b9ca-5d8f-92b8-ee56ad84c593" 00:05:24.069 ], 00:05:24.069 "product_name": "passthru", 00:05:24.069 "block_size": 512, 00:05:24.069 "num_blocks": 16384, 00:05:24.069 "uuid": "57a9c569-b9ca-5d8f-92b8-ee56ad84c593", 00:05:24.069 "assigned_rate_limits": { 00:05:24.069 "rw_ios_per_sec": 0, 00:05:24.069 "rw_mbytes_per_sec": 0, 00:05:24.069 "r_mbytes_per_sec": 0, 00:05:24.069 "w_mbytes_per_sec": 0 00:05:24.069 }, 00:05:24.069 "claimed": false, 00:05:24.069 "zoned": false, 00:05:24.069 "supported_io_types": { 00:05:24.069 "read": true, 00:05:24.069 "write": true, 00:05:24.069 "unmap": true, 00:05:24.069 "write_zeroes": true, 00:05:24.069 "flush": true, 00:05:24.069 "reset": true, 00:05:24.069 "compare": false, 00:05:24.069 "compare_and_write": false, 00:05:24.069 "abort": true, 00:05:24.069 "nvme_admin": false, 00:05:24.069 "nvme_io": false 00:05:24.069 }, 00:05:24.069 "memory_domains": [ 00:05:24.069 { 00:05:24.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.069 "dma_device_type": 2 00:05:24.069 } 00:05:24.069 ], 00:05:24.069 "driver_specific": { 00:05:24.069 "passthru": { 00:05:24.069 "name": "Passthru0", 00:05:24.069 "base_bdev_name": "Malloc2" 00:05:24.069 } 00:05:24.069 } 00:05:24.069 } 00:05:24.069 ]' 00:05:24.069 06:30:34 -- rpc/rpc.sh@21 -- # jq length 00:05:24.069 06:30:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.069 06:30:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.069 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.069 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.069 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.069 06:30:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.069 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.069 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.069 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.069 06:30:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.069 06:30:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.069 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.069 06:30:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.069 06:30:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.069 06:30:34 -- rpc/rpc.sh@26 -- # jq length 00:05:24.069 06:30:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.069 00:05:24.069 real 0m0.223s 00:05:24.069 user 0m0.125s 00:05:24.069 sys 0m0.029s 00:05:24.069 ************************************ 00:05:24.069 END TEST rpc_daemon_integrity 00:05:24.069 ************************************ 00:05:24.069 06:30:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.069 
06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:05:24.069 06:30:34 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:24.069 06:30:34 -- rpc/rpc.sh@84 -- # killprocess 68241 00:05:24.069 06:30:34 -- common/autotest_common.sh@936 -- # '[' -z 68241 ']' 00:05:24.069 06:30:34 -- common/autotest_common.sh@940 -- # kill -0 68241 00:05:24.069 06:30:34 -- common/autotest_common.sh@941 -- # uname 00:05:24.069 06:30:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.069 06:30:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68241 00:05:24.069 06:30:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.069 killing process with pid 68241 00:05:24.069 06:30:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.069 06:30:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68241' 00:05:24.069 06:30:34 -- common/autotest_common.sh@955 -- # kill 68241 00:05:24.069 06:30:34 -- common/autotest_common.sh@960 -- # wait 68241 00:05:24.640 00:05:24.640 real 0m2.551s 00:05:24.640 user 0m2.806s 00:05:24.640 sys 0m0.794s 00:05:24.640 06:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.640 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.640 ************************************ 00:05:24.640 END TEST rpc 00:05:24.640 ************************************ 00:05:24.640 06:30:35 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.640 06:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.640 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.640 ************************************ 00:05:24.640 START TEST rpc_client 00:05:24.640 ************************************ 00:05:24.640 06:30:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.640 * Looking for test storage... 00:05:24.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:24.640 06:30:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.640 06:30:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.640 06:30:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.640 06:30:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.640 06:30:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.640 06:30:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.640 06:30:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.640 06:30:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.640 06:30:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.640 06:30:35 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.640 06:30:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.640 06:30:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.640 06:30:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.640 06:30:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.640 06:30:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.640 06:30:35 -- scripts/common.sh@344 -- # : 1 00:05:24.640 06:30:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.640 06:30:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.640 06:30:35 -- scripts/common.sh@364 -- # decimal 1 00:05:24.640 06:30:35 -- scripts/common.sh@352 -- # local d=1 00:05:24.640 06:30:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.640 06:30:35 -- scripts/common.sh@354 -- # echo 1 00:05:24.640 06:30:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.640 06:30:35 -- scripts/common.sh@365 -- # decimal 2 00:05:24.640 06:30:35 -- scripts/common.sh@352 -- # local d=2 00:05:24.640 06:30:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.640 06:30:35 -- scripts/common.sh@354 -- # echo 2 00:05:24.640 06:30:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.640 06:30:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.640 06:30:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.640 06:30:35 -- scripts/common.sh@367 -- # return 0 00:05:24.640 06:30:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.640 --rc genhtml_branch_coverage=1 00:05:24.640 --rc genhtml_function_coverage=1 00:05:24.640 --rc genhtml_legend=1 00:05:24.640 --rc geninfo_all_blocks=1 00:05:24.640 --rc geninfo_unexecuted_blocks=1 00:05:24.640 00:05:24.640 ' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.640 --rc genhtml_branch_coverage=1 00:05:24.640 --rc genhtml_function_coverage=1 00:05:24.640 --rc genhtml_legend=1 00:05:24.640 --rc geninfo_all_blocks=1 00:05:24.640 --rc geninfo_unexecuted_blocks=1 00:05:24.640 00:05:24.640 ' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.640 --rc genhtml_branch_coverage=1 00:05:24.640 --rc genhtml_function_coverage=1 00:05:24.640 --rc genhtml_legend=1 00:05:24.640 --rc geninfo_all_blocks=1 00:05:24.640 --rc geninfo_unexecuted_blocks=1 00:05:24.640 00:05:24.640 ' 00:05:24.640 06:30:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.640 --rc genhtml_branch_coverage=1 00:05:24.640 --rc genhtml_function_coverage=1 00:05:24.640 --rc genhtml_legend=1 00:05:24.640 --rc geninfo_all_blocks=1 00:05:24.640 --rc geninfo_unexecuted_blocks=1 00:05:24.640 00:05:24.640 ' 00:05:24.641 06:30:35 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:24.901 OK 00:05:24.901 06:30:35 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.901 00:05:24.901 real 0m0.174s 00:05:24.901 user 0m0.094s 00:05:24.901 sys 0m0.088s 00:05:24.901 06:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.901 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.901 ************************************ 00:05:24.901 END TEST rpc_client 00:05:24.901 ************************************ 00:05:24.901 06:30:35 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.901 06:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.901 06:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.901 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.901 ************************************ 00:05:24.901 START TEST 
json_config 00:05:24.901 ************************************ 00:05:24.901 06:30:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.901 06:30:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.901 06:30:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.901 06:30:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.901 06:30:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.901 06:30:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.901 06:30:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.901 06:30:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.901 06:30:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.901 06:30:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.901 06:30:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.901 06:30:35 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.901 06:30:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.901 06:30:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.901 06:30:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.901 06:30:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.901 06:30:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.901 06:30:35 -- scripts/common.sh@344 -- # : 1 00:05:24.901 06:30:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.901 06:30:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.901 06:30:35 -- scripts/common.sh@364 -- # decimal 1 00:05:24.901 06:30:35 -- scripts/common.sh@352 -- # local d=1 00:05:24.901 06:30:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.901 06:30:35 -- scripts/common.sh@354 -- # echo 1 00:05:24.901 06:30:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.901 06:30:35 -- scripts/common.sh@365 -- # decimal 2 00:05:24.901 06:30:35 -- scripts/common.sh@352 -- # local d=2 00:05:24.901 06:30:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.901 06:30:35 -- scripts/common.sh@354 -- # echo 2 00:05:24.901 06:30:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.901 06:30:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.901 06:30:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.901 06:30:35 -- scripts/common.sh@367 -- # return 0 00:05:24.901 06:30:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.901 06:30:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.901 --rc genhtml_branch_coverage=1 00:05:24.901 --rc genhtml_function_coverage=1 00:05:24.901 --rc genhtml_legend=1 00:05:24.901 --rc geninfo_all_blocks=1 00:05:24.901 --rc geninfo_unexecuted_blocks=1 00:05:24.901 00:05:24.901 ' 00:05:24.901 06:30:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.901 --rc genhtml_branch_coverage=1 00:05:24.901 --rc genhtml_function_coverage=1 00:05:24.901 --rc genhtml_legend=1 00:05:24.901 --rc geninfo_all_blocks=1 00:05:24.901 --rc geninfo_unexecuted_blocks=1 00:05:24.901 00:05:24.901 ' 00:05:24.901 06:30:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.901 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc 
geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 06:30:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.902 --rc genhtml_branch_coverage=1 00:05:24.902 --rc genhtml_function_coverage=1 00:05:24.902 --rc genhtml_legend=1 00:05:24.902 --rc geninfo_all_blocks=1 00:05:24.902 --rc geninfo_unexecuted_blocks=1 00:05:24.902 00:05:24.902 ' 00:05:24.902 06:30:35 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.902 06:30:35 -- nvmf/common.sh@7 -- # uname -s 00:05:24.902 06:30:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.902 06:30:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.902 06:30:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.902 06:30:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.902 06:30:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.902 06:30:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.902 06:30:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.902 06:30:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.902 06:30:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.902 06:30:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.902 06:30:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d94a1e48-332c-4779-8460-834f9d0a8e4e 00:05:24.902 06:30:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=d94a1e48-332c-4779-8460-834f9d0a8e4e 00:05:24.902 06:30:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.902 06:30:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.902 06:30:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.902 06:30:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.902 06:30:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.902 06:30:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.902 06:30:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.902 06:30:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.902 06:30:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.902 06:30:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.902 
06:30:35 -- paths/export.sh@5 -- # export PATH 00:05:24.902 06:30:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.902 06:30:35 -- nvmf/common.sh@46 -- # : 0 00:05:24.902 06:30:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:24.902 06:30:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:24.902 06:30:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:24.902 06:30:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.902 06:30:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.902 06:30:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:24.902 06:30:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:24.902 06:30:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:24.902 06:30:35 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:24.902 06:30:35 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:24.902 06:30:35 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:24.902 06:30:35 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.902 WARNING: No tests are enabled so not running JSON configuration tests 00:05:24.902 06:30:35 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:24.902 06:30:35 -- json_config/json_config.sh@27 -- # exit 0 00:05:24.902 00:05:24.902 real 0m0.133s 00:05:24.902 user 0m0.094s 00:05:24.902 sys 0m0.042s 00:05:24.902 06:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.902 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.902 ************************************ 00:05:24.902 END TEST json_config 00:05:24.902 ************************************ 00:05:24.902 06:30:35 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.902 06:30:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.902 06:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.902 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:24.902 ************************************ 00:05:24.902 START TEST json_config_extra_key 00:05:24.902 ************************************ 00:05:24.902 06:30:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.163 06:30:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.163 06:30:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.163 06:30:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.163 06:30:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.163 06:30:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.163 06:30:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.163 06:30:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.163 06:30:35 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.163 06:30:35 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.163 06:30:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.164 06:30:35 
-- scripts/common.sh@336 -- # read -ra ver2 00:05:25.164 06:30:35 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.164 06:30:35 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.164 06:30:35 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.164 06:30:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.164 06:30:35 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.164 06:30:35 -- scripts/common.sh@344 -- # : 1 00:05:25.164 06:30:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.164 06:30:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.164 06:30:35 -- scripts/common.sh@364 -- # decimal 1 00:05:25.164 06:30:35 -- scripts/common.sh@352 -- # local d=1 00:05:25.164 06:30:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.164 06:30:35 -- scripts/common.sh@354 -- # echo 1 00:05:25.164 06:30:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.164 06:30:35 -- scripts/common.sh@365 -- # decimal 2 00:05:25.164 06:30:35 -- scripts/common.sh@352 -- # local d=2 00:05:25.164 06:30:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.164 06:30:35 -- scripts/common.sh@354 -- # echo 2 00:05:25.164 06:30:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.164 06:30:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.164 06:30:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.164 06:30:35 -- scripts/common.sh@367 -- # return 0 00:05:25.164 06:30:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.164 06:30:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.164 --rc genhtml_branch_coverage=1 00:05:25.164 --rc genhtml_function_coverage=1 00:05:25.164 --rc genhtml_legend=1 00:05:25.164 --rc geninfo_all_blocks=1 00:05:25.164 --rc geninfo_unexecuted_blocks=1 00:05:25.164 00:05:25.164 ' 00:05:25.164 06:30:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.164 --rc genhtml_branch_coverage=1 00:05:25.164 --rc genhtml_function_coverage=1 00:05:25.164 --rc genhtml_legend=1 00:05:25.164 --rc geninfo_all_blocks=1 00:05:25.164 --rc geninfo_unexecuted_blocks=1 00:05:25.164 00:05:25.164 ' 00:05:25.164 06:30:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.164 --rc genhtml_branch_coverage=1 00:05:25.164 --rc genhtml_function_coverage=1 00:05:25.164 --rc genhtml_legend=1 00:05:25.164 --rc geninfo_all_blocks=1 00:05:25.164 --rc geninfo_unexecuted_blocks=1 00:05:25.164 00:05:25.164 ' 00:05:25.164 06:30:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.164 --rc genhtml_branch_coverage=1 00:05:25.164 --rc genhtml_function_coverage=1 00:05:25.164 --rc genhtml_legend=1 00:05:25.164 --rc geninfo_all_blocks=1 00:05:25.164 --rc geninfo_unexecuted_blocks=1 00:05:25.164 00:05:25.164 ' 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.164 06:30:35 -- nvmf/common.sh@7 -- # uname -s 00:05:25.164 06:30:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.164 06:30:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.164 06:30:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.164 06:30:35 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:05:25.164 06:30:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.164 06:30:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.164 06:30:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.164 06:30:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.164 06:30:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.164 06:30:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.164 06:30:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d94a1e48-332c-4779-8460-834f9d0a8e4e 00:05:25.164 06:30:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=d94a1e48-332c-4779-8460-834f9d0a8e4e 00:05:25.164 06:30:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.164 06:30:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.164 06:30:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.164 06:30:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.164 06:30:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.164 06:30:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.164 06:30:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.164 06:30:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.164 06:30:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.164 06:30:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.164 06:30:35 -- paths/export.sh@5 -- # export PATH 00:05:25.164 06:30:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.164 06:30:35 -- nvmf/common.sh@46 -- # : 0 00:05:25.164 06:30:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:25.164 06:30:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:25.164 06:30:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:25.164 06:30:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.164 06:30:35 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.164 06:30:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:25.164 06:30:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:25.164 06:30:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.164 INFO: launching applications... 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68535 00:05:25.164 Waiting for target to run... 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68535 /var/tmp/spdk_tgt.sock 00:05:25.164 06:30:35 -- common/autotest_common.sh@829 -- # '[' -z 68535 ']' 00:05:25.164 06:30:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.164 06:30:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.164 06:30:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.164 06:30:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.164 06:30:35 -- common/autotest_common.sh@10 -- # set +x 00:05:25.164 06:30:35 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.164 [2024-11-28 06:30:35.863591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
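The entries above show json_config_extra_key bringing up the target from a JSON config: spdk_tgt is launched under app_pid 68535 with -r /var/tmp/spdk_tgt.sock --json extra_key.json, and waitforlisten then blocks until the RPC socket answers. A condensed sketch of that start-and-wait pattern follows; the launch command is verbatim from the trace, but the polling loop is a simplification, not the real waitforlisten from common/autotest_common.sh.

  # Sketch only: launch command copied from the trace; the poll loop is a
  # simplified stand-in for waitforlisten, not the verbatim implementation.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!
  # Poll until the target answers RPCs on its UNIX-domain socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done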
00:05:25.164 [2024-11-28 06:30:35.863719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68535 ] 00:05:25.424 [2024-11-28 06:30:36.148791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.424 [2024-11-28 06:30:36.169809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.424 [2024-11-28 06:30:36.170018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.995 06:30:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.995 00:05:25.995 06:30:36 -- common/autotest_common.sh@862 -- # return 0 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:25.995 INFO: shutting down applications... 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68535 ]] 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68535 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68535 00:05:25.995 06:30:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68535 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:26.572 SPDK target shutdown done 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:26.572 Success 00:05:26.572 06:30:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:26.572 00:05:26.572 real 0m1.526s 00:05:26.572 user 0m1.278s 00:05:26.572 sys 0m0.340s 00:05:26.572 ************************************ 00:05:26.572 END TEST json_config_extra_key 00:05:26.572 ************************************ 00:05:26.572 06:30:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.572 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.572 06:30:37 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.572 06:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.572 06:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.572 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.572 ************************************ 00:05:26.572 START TEST alias_rpc 00:05:26.572 ************************************ 00:05:26.572 06:30:37 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.572 * Looking for test storage... 00:05:26.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:26.572 06:30:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.572 06:30:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.572 06:30:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.834 06:30:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.834 06:30:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.834 06:30:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.834 06:30:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.834 06:30:37 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.834 06:30:37 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.834 06:30:37 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.834 06:30:37 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.834 06:30:37 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.834 06:30:37 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.834 06:30:37 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.834 06:30:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.834 06:30:37 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.834 06:30:37 -- scripts/common.sh@344 -- # : 1 00:05:26.834 06:30:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.834 06:30:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.834 06:30:37 -- scripts/common.sh@364 -- # decimal 1 00:05:26.834 06:30:37 -- scripts/common.sh@352 -- # local d=1 00:05:26.834 06:30:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.834 06:30:37 -- scripts/common.sh@354 -- # echo 1 00:05:26.834 06:30:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.834 06:30:37 -- scripts/common.sh@365 -- # decimal 2 00:05:26.834 06:30:37 -- scripts/common.sh@352 -- # local d=2 00:05:26.834 06:30:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.834 06:30:37 -- scripts/common.sh@354 -- # echo 2 00:05:26.834 06:30:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.834 06:30:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.834 06:30:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.834 06:30:37 -- scripts/common.sh@367 -- # return 0 00:05:26.834 06:30:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.834 06:30:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.834 --rc genhtml_branch_coverage=1 00:05:26.834 --rc genhtml_function_coverage=1 00:05:26.834 --rc genhtml_legend=1 00:05:26.834 --rc geninfo_all_blocks=1 00:05:26.834 --rc geninfo_unexecuted_blocks=1 00:05:26.834 00:05:26.834 ' 00:05:26.834 06:30:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.834 --rc genhtml_branch_coverage=1 00:05:26.834 --rc genhtml_function_coverage=1 00:05:26.834 --rc genhtml_legend=1 00:05:26.834 --rc geninfo_all_blocks=1 00:05:26.834 --rc geninfo_unexecuted_blocks=1 00:05:26.834 00:05:26.834 ' 00:05:26.834 06:30:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.834 --rc genhtml_branch_coverage=1 00:05:26.834 --rc genhtml_function_coverage=1 00:05:26.834 --rc genhtml_legend=1 
00:05:26.834 --rc geninfo_all_blocks=1 00:05:26.834 --rc geninfo_unexecuted_blocks=1 00:05:26.834 00:05:26.834 ' 00:05:26.834 06:30:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.834 --rc genhtml_branch_coverage=1 00:05:26.834 --rc genhtml_function_coverage=1 00:05:26.834 --rc genhtml_legend=1 00:05:26.834 --rc geninfo_all_blocks=1 00:05:26.834 --rc geninfo_unexecuted_blocks=1 00:05:26.834 00:05:26.834 ' 00:05:26.834 06:30:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.834 06:30:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68602 00:05:26.834 06:30:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68602 00:05:26.834 06:30:37 -- common/autotest_common.sh@829 -- # '[' -z 68602 ']' 00:05:26.834 06:30:37 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.834 06:30:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.834 06:30:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.834 06:30:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.834 06:30:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.834 06:30:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.834 [2024-11-28 06:30:37.444986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:26.834 [2024-11-28 06:30:37.445215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68602 ] 00:05:26.834 [2024-11-28 06:30:37.581191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.095 [2024-11-28 06:30:37.623007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.095 [2024-11-28 06:30:37.623417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.668 06:30:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.668 06:30:38 -- common/autotest_common.sh@862 -- # return 0 00:05:27.668 06:30:38 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:27.931 06:30:38 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68602 00:05:27.931 06:30:38 -- common/autotest_common.sh@936 -- # '[' -z 68602 ']' 00:05:27.931 06:30:38 -- common/autotest_common.sh@940 -- # kill -0 68602 00:05:27.931 06:30:38 -- common/autotest_common.sh@941 -- # uname 00:05:27.931 06:30:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.931 06:30:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68602 00:05:27.931 killing process with pid 68602 00:05:27.931 06:30:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.931 06:30:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.931 06:30:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68602' 00:05:27.931 06:30:38 -- common/autotest_common.sh@955 -- # kill 68602 00:05:27.931 06:30:38 -- common/autotest_common.sh@960 -- # wait 68602 00:05:28.193 ************************************ 00:05:28.193 END TEST alias_rpc 00:05:28.193 
************************************ 00:05:28.193 00:05:28.193 real 0m1.569s 00:05:28.193 user 0m1.615s 00:05:28.193 sys 0m0.407s 00:05:28.193 06:30:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.193 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:28.193 06:30:38 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:28.193 06:30:38 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:28.193 06:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.193 06:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.193 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:28.193 ************************************ 00:05:28.193 START TEST spdkcli_tcp 00:05:28.193 ************************************ 00:05:28.193 06:30:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:28.193 * Looking for test storage... 00:05:28.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:28.194 06:30:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.194 06:30:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.194 06:30:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.455 06:30:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.455 06:30:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.455 06:30:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.455 06:30:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.455 06:30:38 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.455 06:30:38 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.455 06:30:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.455 06:30:38 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.455 06:30:38 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.455 06:30:38 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.455 06:30:38 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.455 06:30:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.455 06:30:38 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.455 06:30:38 -- scripts/common.sh@344 -- # : 1 00:05:28.455 06:30:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.455 06:30:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.455 06:30:38 -- scripts/common.sh@364 -- # decimal 1 00:05:28.455 06:30:38 -- scripts/common.sh@352 -- # local d=1 00:05:28.455 06:30:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.455 06:30:38 -- scripts/common.sh@354 -- # echo 1 00:05:28.455 06:30:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.455 06:30:38 -- scripts/common.sh@365 -- # decimal 2 00:05:28.455 06:30:38 -- scripts/common.sh@352 -- # local d=2 00:05:28.455 06:30:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.455 06:30:38 -- scripts/common.sh@354 -- # echo 2 00:05:28.455 06:30:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.455 06:30:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.455 06:30:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.455 06:30:38 -- scripts/common.sh@367 -- # return 0 00:05:28.455 06:30:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.455 06:30:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.455 --rc genhtml_branch_coverage=1 00:05:28.455 --rc genhtml_function_coverage=1 00:05:28.455 --rc genhtml_legend=1 00:05:28.455 --rc geninfo_all_blocks=1 00:05:28.455 --rc geninfo_unexecuted_blocks=1 00:05:28.455 00:05:28.455 ' 00:05:28.455 06:30:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.455 --rc genhtml_branch_coverage=1 00:05:28.455 --rc genhtml_function_coverage=1 00:05:28.455 --rc genhtml_legend=1 00:05:28.455 --rc geninfo_all_blocks=1 00:05:28.455 --rc geninfo_unexecuted_blocks=1 00:05:28.455 00:05:28.455 ' 00:05:28.455 06:30:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.455 --rc genhtml_branch_coverage=1 00:05:28.455 --rc genhtml_function_coverage=1 00:05:28.455 --rc genhtml_legend=1 00:05:28.455 --rc geninfo_all_blocks=1 00:05:28.455 --rc geninfo_unexecuted_blocks=1 00:05:28.455 00:05:28.455 ' 00:05:28.455 06:30:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.455 --rc genhtml_branch_coverage=1 00:05:28.455 --rc genhtml_function_coverage=1 00:05:28.455 --rc genhtml_legend=1 00:05:28.455 --rc geninfo_all_blocks=1 00:05:28.455 --rc geninfo_unexecuted_blocks=1 00:05:28.455 00:05:28.455 ' 00:05:28.455 06:30:38 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:28.455 06:30:38 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:28.455 06:30:38 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.456 06:30:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.456 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:28.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
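tcp.sh exercises the RPC client's TCP transport rather than the usual UNIX-domain socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP endpoint with retries. Restated from the commands traced just below (both invocations appear verbatim in the trace):

  # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Talk to the target over TCP: retry up to 100 times, 2 s timeout each.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods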
00:05:28.456 06:30:38 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=68686 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@27 -- # waitforlisten 68686 00:05:28.456 06:30:38 -- common/autotest_common.sh@829 -- # '[' -z 68686 ']' 00:05:28.456 06:30:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.456 06:30:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.456 06:30:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.456 06:30:38 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.456 06:30:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.456 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:28.456 [2024-11-28 06:30:39.056230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:28.456 [2024-11-28 06:30:39.056344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68686 ] 00:05:28.456 [2024-11-28 06:30:39.193416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.716 [2024-11-28 06:30:39.234032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.716 [2024-11-28 06:30:39.234463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.716 [2024-11-28 06:30:39.234526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.289 06:30:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.289 06:30:39 -- common/autotest_common.sh@862 -- # return 0 00:05:29.289 06:30:39 -- spdkcli/tcp.sh@31 -- # socat_pid=68703 00:05:29.289 06:30:39 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.289 06:30:39 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.289 [ 00:05:29.289 "bdev_malloc_delete", 00:05:29.289 "bdev_malloc_create", 00:05:29.289 "bdev_null_resize", 00:05:29.289 "bdev_null_delete", 00:05:29.289 "bdev_null_create", 00:05:29.289 "bdev_nvme_cuse_unregister", 00:05:29.289 "bdev_nvme_cuse_register", 00:05:29.289 "bdev_opal_new_user", 00:05:29.289 "bdev_opal_set_lock_state", 00:05:29.289 "bdev_opal_delete", 00:05:29.289 "bdev_opal_get_info", 00:05:29.289 "bdev_opal_create", 00:05:29.289 "bdev_nvme_opal_revert", 00:05:29.289 "bdev_nvme_opal_init", 00:05:29.289 "bdev_nvme_send_cmd", 00:05:29.289 "bdev_nvme_get_path_iostat", 00:05:29.289 "bdev_nvme_get_mdns_discovery_info", 00:05:29.289 "bdev_nvme_stop_mdns_discovery", 00:05:29.289 "bdev_nvme_start_mdns_discovery", 00:05:29.289 "bdev_nvme_set_multipath_policy", 00:05:29.289 "bdev_nvme_set_preferred_path", 00:05:29.289 "bdev_nvme_get_io_paths", 00:05:29.289 "bdev_nvme_remove_error_injection", 00:05:29.289 "bdev_nvme_add_error_injection", 00:05:29.289 "bdev_nvme_get_discovery_info", 00:05:29.289 "bdev_nvme_stop_discovery", 00:05:29.289 "bdev_nvme_start_discovery", 00:05:29.290 "bdev_nvme_get_controller_health_info", 00:05:29.290 "bdev_nvme_disable_controller", 00:05:29.290 "bdev_nvme_enable_controller", 00:05:29.290 "bdev_nvme_reset_controller", 00:05:29.290 "bdev_nvme_get_transport_statistics", 00:05:29.290 "bdev_nvme_apply_firmware", 00:05:29.290 "bdev_nvme_detach_controller", 00:05:29.290 
"bdev_nvme_get_controllers", 00:05:29.290 "bdev_nvme_attach_controller", 00:05:29.290 "bdev_nvme_set_hotplug", 00:05:29.290 "bdev_nvme_set_options", 00:05:29.290 "bdev_passthru_delete", 00:05:29.290 "bdev_passthru_create", 00:05:29.290 "bdev_lvol_grow_lvstore", 00:05:29.290 "bdev_lvol_get_lvols", 00:05:29.290 "bdev_lvol_get_lvstores", 00:05:29.290 "bdev_lvol_delete", 00:05:29.290 "bdev_lvol_set_read_only", 00:05:29.290 "bdev_lvol_resize", 00:05:29.290 "bdev_lvol_decouple_parent", 00:05:29.290 "bdev_lvol_inflate", 00:05:29.290 "bdev_lvol_rename", 00:05:29.290 "bdev_lvol_clone_bdev", 00:05:29.290 "bdev_lvol_clone", 00:05:29.290 "bdev_lvol_snapshot", 00:05:29.290 "bdev_lvol_create", 00:05:29.290 "bdev_lvol_delete_lvstore", 00:05:29.290 "bdev_lvol_rename_lvstore", 00:05:29.290 "bdev_lvol_create_lvstore", 00:05:29.290 "bdev_raid_set_options", 00:05:29.290 "bdev_raid_remove_base_bdev", 00:05:29.290 "bdev_raid_add_base_bdev", 00:05:29.290 "bdev_raid_delete", 00:05:29.290 "bdev_raid_create", 00:05:29.290 "bdev_raid_get_bdevs", 00:05:29.290 "bdev_error_inject_error", 00:05:29.290 "bdev_error_delete", 00:05:29.290 "bdev_error_create", 00:05:29.290 "bdev_split_delete", 00:05:29.290 "bdev_split_create", 00:05:29.290 "bdev_delay_delete", 00:05:29.290 "bdev_delay_create", 00:05:29.290 "bdev_delay_update_latency", 00:05:29.290 "bdev_zone_block_delete", 00:05:29.290 "bdev_zone_block_create", 00:05:29.290 "blobfs_create", 00:05:29.290 "blobfs_detect", 00:05:29.290 "blobfs_set_cache_size", 00:05:29.290 "bdev_xnvme_delete", 00:05:29.290 "bdev_xnvme_create", 00:05:29.290 "bdev_aio_delete", 00:05:29.290 "bdev_aio_rescan", 00:05:29.290 "bdev_aio_create", 00:05:29.290 "bdev_ftl_set_property", 00:05:29.290 "bdev_ftl_get_properties", 00:05:29.290 "bdev_ftl_get_stats", 00:05:29.290 "bdev_ftl_unmap", 00:05:29.290 "bdev_ftl_unload", 00:05:29.290 "bdev_ftl_delete", 00:05:29.290 "bdev_ftl_load", 00:05:29.290 "bdev_ftl_create", 00:05:29.290 "bdev_virtio_attach_controller", 00:05:29.290 "bdev_virtio_scsi_get_devices", 00:05:29.290 "bdev_virtio_detach_controller", 00:05:29.290 "bdev_virtio_blk_set_hotplug", 00:05:29.290 "bdev_iscsi_delete", 00:05:29.290 "bdev_iscsi_create", 00:05:29.290 "bdev_iscsi_set_options", 00:05:29.290 "accel_error_inject_error", 00:05:29.290 "ioat_scan_accel_module", 00:05:29.290 "dsa_scan_accel_module", 00:05:29.290 "iaa_scan_accel_module", 00:05:29.290 "iscsi_set_options", 00:05:29.290 "iscsi_get_auth_groups", 00:05:29.290 "iscsi_auth_group_remove_secret", 00:05:29.290 "iscsi_auth_group_add_secret", 00:05:29.290 "iscsi_delete_auth_group", 00:05:29.290 "iscsi_create_auth_group", 00:05:29.290 "iscsi_set_discovery_auth", 00:05:29.290 "iscsi_get_options", 00:05:29.290 "iscsi_target_node_request_logout", 00:05:29.290 "iscsi_target_node_set_redirect", 00:05:29.290 "iscsi_target_node_set_auth", 00:05:29.290 "iscsi_target_node_add_lun", 00:05:29.290 "iscsi_get_connections", 00:05:29.290 "iscsi_portal_group_set_auth", 00:05:29.290 "iscsi_start_portal_group", 00:05:29.290 "iscsi_delete_portal_group", 00:05:29.290 "iscsi_create_portal_group", 00:05:29.290 "iscsi_get_portal_groups", 00:05:29.290 "iscsi_delete_target_node", 00:05:29.290 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.290 "iscsi_target_node_add_pg_ig_maps", 00:05:29.290 "iscsi_create_target_node", 00:05:29.290 "iscsi_get_target_nodes", 00:05:29.290 "iscsi_delete_initiator_group", 00:05:29.290 "iscsi_initiator_group_remove_initiators", 00:05:29.290 "iscsi_initiator_group_add_initiators", 00:05:29.290 "iscsi_create_initiator_group", 00:05:29.290 
"iscsi_get_initiator_groups", 00:05:29.290 "nvmf_set_crdt", 00:05:29.290 "nvmf_set_config", 00:05:29.290 "nvmf_set_max_subsystems", 00:05:29.290 "nvmf_subsystem_get_listeners", 00:05:29.290 "nvmf_subsystem_get_qpairs", 00:05:29.290 "nvmf_subsystem_get_controllers", 00:05:29.290 "nvmf_get_stats", 00:05:29.290 "nvmf_get_transports", 00:05:29.290 "nvmf_create_transport", 00:05:29.290 "nvmf_get_targets", 00:05:29.290 "nvmf_delete_target", 00:05:29.290 "nvmf_create_target", 00:05:29.290 "nvmf_subsystem_allow_any_host", 00:05:29.290 "nvmf_subsystem_remove_host", 00:05:29.290 "nvmf_subsystem_add_host", 00:05:29.290 "nvmf_subsystem_remove_ns", 00:05:29.290 "nvmf_subsystem_add_ns", 00:05:29.290 "nvmf_subsystem_listener_set_ana_state", 00:05:29.290 "nvmf_discovery_get_referrals", 00:05:29.290 "nvmf_discovery_remove_referral", 00:05:29.290 "nvmf_discovery_add_referral", 00:05:29.290 "nvmf_subsystem_remove_listener", 00:05:29.290 "nvmf_subsystem_add_listener", 00:05:29.290 "nvmf_delete_subsystem", 00:05:29.290 "nvmf_create_subsystem", 00:05:29.290 "nvmf_get_subsystems", 00:05:29.290 "env_dpdk_get_mem_stats", 00:05:29.290 "nbd_get_disks", 00:05:29.290 "nbd_stop_disk", 00:05:29.290 "nbd_start_disk", 00:05:29.290 "ublk_recover_disk", 00:05:29.290 "ublk_get_disks", 00:05:29.290 "ublk_stop_disk", 00:05:29.290 "ublk_start_disk", 00:05:29.290 "ublk_destroy_target", 00:05:29.290 "ublk_create_target", 00:05:29.290 "virtio_blk_create_transport", 00:05:29.290 "virtio_blk_get_transports", 00:05:29.290 "vhost_controller_set_coalescing", 00:05:29.290 "vhost_get_controllers", 00:05:29.290 "vhost_delete_controller", 00:05:29.290 "vhost_create_blk_controller", 00:05:29.290 "vhost_scsi_controller_remove_target", 00:05:29.290 "vhost_scsi_controller_add_target", 00:05:29.290 "vhost_start_scsi_controller", 00:05:29.290 "vhost_create_scsi_controller", 00:05:29.290 "thread_set_cpumask", 00:05:29.290 "framework_get_scheduler", 00:05:29.290 "framework_set_scheduler", 00:05:29.290 "framework_get_reactors", 00:05:29.290 "thread_get_io_channels", 00:05:29.290 "thread_get_pollers", 00:05:29.290 "thread_get_stats", 00:05:29.290 "framework_monitor_context_switch", 00:05:29.290 "spdk_kill_instance", 00:05:29.290 "log_enable_timestamps", 00:05:29.290 "log_get_flags", 00:05:29.290 "log_clear_flag", 00:05:29.290 "log_set_flag", 00:05:29.290 "log_get_level", 00:05:29.290 "log_set_level", 00:05:29.290 "log_get_print_level", 00:05:29.290 "log_set_print_level", 00:05:29.290 "framework_enable_cpumask_locks", 00:05:29.290 "framework_disable_cpumask_locks", 00:05:29.290 "framework_wait_init", 00:05:29.290 "framework_start_init", 00:05:29.290 "scsi_get_devices", 00:05:29.290 "bdev_get_histogram", 00:05:29.290 "bdev_enable_histogram", 00:05:29.290 "bdev_set_qos_limit", 00:05:29.290 "bdev_set_qd_sampling_period", 00:05:29.290 "bdev_get_bdevs", 00:05:29.290 "bdev_reset_iostat", 00:05:29.290 "bdev_get_iostat", 00:05:29.290 "bdev_examine", 00:05:29.290 "bdev_wait_for_examine", 00:05:29.290 "bdev_set_options", 00:05:29.290 "notify_get_notifications", 00:05:29.290 "notify_get_types", 00:05:29.290 "accel_get_stats", 00:05:29.290 "accel_set_options", 00:05:29.290 "accel_set_driver", 00:05:29.290 "accel_crypto_key_destroy", 00:05:29.290 "accel_crypto_keys_get", 00:05:29.290 "accel_crypto_key_create", 00:05:29.290 "accel_assign_opc", 00:05:29.290 "accel_get_module_info", 00:05:29.290 "accel_get_opc_assignments", 00:05:29.290 "vmd_rescan", 00:05:29.290 "vmd_remove_device", 00:05:29.290 "vmd_enable", 00:05:29.290 "sock_set_default_impl", 00:05:29.290 
"sock_impl_set_options", 00:05:29.290 "sock_impl_get_options", 00:05:29.290 "iobuf_get_stats", 00:05:29.290 "iobuf_set_options", 00:05:29.290 "framework_get_pci_devices", 00:05:29.290 "framework_get_config", 00:05:29.290 "framework_get_subsystems", 00:05:29.290 "trace_get_info", 00:05:29.290 "trace_get_tpoint_group_mask", 00:05:29.290 "trace_disable_tpoint_group", 00:05:29.290 "trace_enable_tpoint_group", 00:05:29.290 "trace_clear_tpoint_mask", 00:05:29.290 "trace_set_tpoint_mask", 00:05:29.290 "spdk_get_version", 00:05:29.290 "rpc_get_methods" 00:05:29.290 ] 00:05:29.551 06:30:40 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.551 06:30:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.551 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.551 06:30:40 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.551 06:30:40 -- spdkcli/tcp.sh@38 -- # killprocess 68686 00:05:29.551 06:30:40 -- common/autotest_common.sh@936 -- # '[' -z 68686 ']' 00:05:29.551 06:30:40 -- common/autotest_common.sh@940 -- # kill -0 68686 00:05:29.551 06:30:40 -- common/autotest_common.sh@941 -- # uname 00:05:29.551 06:30:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.551 06:30:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68686 00:05:29.551 killing process with pid 68686 00:05:29.551 06:30:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.551 06:30:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.551 06:30:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68686' 00:05:29.551 06:30:40 -- common/autotest_common.sh@955 -- # kill 68686 00:05:29.551 06:30:40 -- common/autotest_common.sh@960 -- # wait 68686 00:05:29.812 ************************************ 00:05:29.812 END TEST spdkcli_tcp 00:05:29.812 ************************************ 00:05:29.812 00:05:29.812 real 0m1.606s 00:05:29.812 user 0m2.760s 00:05:29.812 sys 0m0.424s 00:05:29.812 06:30:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.812 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.812 06:30:40 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.812 06:30:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.812 06:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.812 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.812 ************************************ 00:05:29.812 START TEST dpdk_mem_utility 00:05:29.812 ************************************ 00:05:29.812 06:30:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.812 * Looking for test storage... 
00:05:29.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:29.812 06:30:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.812 06:30:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.812 06:30:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.074 06:30:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.074 06:30:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.074 06:30:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.074 06:30:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.074 06:30:40 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.074 06:30:40 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.074 06:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.074 06:30:40 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.074 06:30:40 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.074 06:30:40 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.074 06:30:40 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.074 06:30:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.074 06:30:40 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.074 06:30:40 -- scripts/common.sh@344 -- # : 1 00:05:30.074 06:30:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.074 06:30:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.074 06:30:40 -- scripts/common.sh@364 -- # decimal 1 00:05:30.074 06:30:40 -- scripts/common.sh@352 -- # local d=1 00:05:30.074 06:30:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.074 06:30:40 -- scripts/common.sh@354 -- # echo 1 00:05:30.074 06:30:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.074 06:30:40 -- scripts/common.sh@365 -- # decimal 2 00:05:30.074 06:30:40 -- scripts/common.sh@352 -- # local d=2 00:05:30.074 06:30:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.074 06:30:40 -- scripts/common.sh@354 -- # echo 2 00:05:30.074 06:30:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.074 06:30:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.074 06:30:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.074 06:30:40 -- scripts/common.sh@367 -- # return 0 00:05:30.074 06:30:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.074 06:30:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.074 --rc genhtml_branch_coverage=1 00:05:30.074 --rc genhtml_function_coverage=1 00:05:30.074 --rc genhtml_legend=1 00:05:30.074 --rc geninfo_all_blocks=1 00:05:30.074 --rc geninfo_unexecuted_blocks=1 00:05:30.074 00:05:30.074 ' 00:05:30.074 06:30:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.074 --rc genhtml_branch_coverage=1 00:05:30.074 --rc genhtml_function_coverage=1 00:05:30.074 --rc genhtml_legend=1 00:05:30.074 --rc geninfo_all_blocks=1 00:05:30.074 --rc geninfo_unexecuted_blocks=1 00:05:30.074 00:05:30.074 ' 00:05:30.074 06:30:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.074 --rc genhtml_branch_coverage=1 00:05:30.074 --rc genhtml_function_coverage=1 00:05:30.074 --rc genhtml_legend=1 00:05:30.074 --rc geninfo_all_blocks=1 00:05:30.074 --rc geninfo_unexecuted_blocks=1 00:05:30.074 00:05:30.074 ' 
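The same lcov version probe is traced at the top of every test script above: lt 1.15 2 calls cmp_versions, which splits both version strings on the characters .-: and compares them field by field, and because 1.15 sorts before 2 the pre-2.0 --rc option spelling is exported into LCOV_OPTS. A condensed sketch of that comparator, paraphrased from the traced logic in scripts/common.sh and omitting its decimal sanitizing helper:

  # Condensed from the traced logic; illustrative, not the verbatim script.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          # First differing numeric field decides; missing fields count as 0.
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]] && return 0 || return 1; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]] && return 0 || return 1; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }
  lt 1.15 2 && echo "pre-2.0 lcov: use --rc lcov_branch_coverage=1"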
00:05:30.074 06:30:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.074 --rc genhtml_branch_coverage=1 00:05:30.074 --rc genhtml_function_coverage=1 00:05:30.074 --rc genhtml_legend=1 00:05:30.074 --rc geninfo_all_blocks=1 00:05:30.074 --rc geninfo_unexecuted_blocks=1 00:05:30.074 00:05:30.074 ' 00:05:30.074 06:30:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.074 06:30:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68774 00:05:30.074 06:30:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68774 00:05:30.074 06:30:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.074 06:30:40 -- common/autotest_common.sh@829 -- # '[' -z 68774 ']' 00:05:30.074 06:30:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.074 06:30:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.074 06:30:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.074 06:30:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.074 06:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:30.074 [2024-11-28 06:30:40.685616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:30.074 [2024-11-28 06:30:40.685727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68774 ] 00:05:30.074 [2024-11-28 06:30:40.821074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.336 [2024-11-28 06:30:40.866277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.336 [2024-11-28 06:30:40.866684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.910 06:30:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.910 06:30:41 -- common/autotest_common.sh@862 -- # return 0 00:05:30.910 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.910 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.910 06:30:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.910 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.910 { 00:05:30.910 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.910 } 00:05:30.910 06:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.910 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.910 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:30.910 1 heaps totaling size 814.000000 MiB 00:05:30.910 size: 814.000000 MiB heap id: 0 00:05:30.910 end heaps---------- 00:05:30.910 8 mempools totaling size 598.116089 MiB 00:05:30.910 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.910 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.910 size: 84.521057 MiB name: bdev_io_68774 00:05:30.910 size: 51.011292 MiB name: evtpool_68774 00:05:30.910 size: 50.003479 MiB name: msgpool_68774 
00:05:30.910 size: 21.763794 MiB name: PDU_Pool 00:05:30.910 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.910 size: 0.026123 MiB name: Session_Pool 00:05:30.910 end mempools------- 00:05:30.910 6 memzones totaling size 4.142822 MiB 00:05:30.910 size: 1.000366 MiB name: RG_ring_0_68774 00:05:30.910 size: 1.000366 MiB name: RG_ring_1_68774 00:05:30.910 size: 1.000366 MiB name: RG_ring_4_68774 00:05:30.910 size: 1.000366 MiB name: RG_ring_5_68774 00:05:30.910 size: 0.125366 MiB name: RG_ring_2_68774 00:05:30.910 size: 0.015991 MiB name: RG_ring_3_68774 00:05:30.910 end memzones------- 00:05:30.910 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.910 heap id: 0 total size: 814.000000 MiB number of busy elements: 312 number of free elements: 15 00:05:30.910 list of free elements. size: 12.469727 MiB 00:05:30.910 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:30.910 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:30.910 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:30.910 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:30.910 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:30.910 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:30.910 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:30.910 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:30.910 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:30.911 element at address: 0x20001aa00000 with size: 0.567505 MiB 00:05:30.911 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:30.911 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:30.911 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:30.911 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:30.911 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:30.911 list of standard malloc elements. 
size: 199.267700 MiB 00:05:30.911 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:30.911 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:30.911 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:30.911 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:30.911 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:30.911 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.911 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:30.911 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.911 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:30.911 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:30.911 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:30.911 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:30.911 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:05:30.911 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91780 
with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93c40 with size: 0.000183 MiB 
00:05:30.912 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:30.912 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:30.912 element at 
address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f300 
with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:30.912 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:30.913 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:30.913 list of memzone associated elements. size: 602.262573 MiB 00:05:30.913 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:30.913 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.913 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:30.913 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.913 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:30.913 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68774_0 00:05:30.913 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:30.913 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68774_0 00:05:30.913 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:30.913 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68774_0 00:05:30.913 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:30.913 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.913 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:30.913 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.913 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:30.913 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68774 00:05:30.913 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:30.913 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68774 00:05:30.913 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.913 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68774 00:05:30.913 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:30.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.913 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:30.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.913 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:30.913 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.913 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:30.913 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.913 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:30.913 associated memzone info: size: 
1.000366 MiB name: RG_ring_0_68774 00:05:30.913 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:30.913 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68774 00:05:30.913 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:30.913 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68774 00:05:30.913 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:30.913 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68774 00:05:30.913 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:30.913 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68774 00:05:30.913 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:30.913 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.913 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:30.913 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.913 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:30.913 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.913 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:30.913 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68774 00:05:30.913 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:30.913 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.913 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:30.913 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.913 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:30.913 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68774 00:05:30.913 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:30.913 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.913 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:30.913 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68774 00:05:30.913 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:30.913 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68774 00:05:30.913 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:30.913 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.913 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.913 06:30:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68774 00:05:30.913 06:30:41 -- common/autotest_common.sh@936 -- # '[' -z 68774 ']' 00:05:30.913 06:30:41 -- common/autotest_common.sh@940 -- # kill -0 68774 00:05:30.913 06:30:41 -- common/autotest_common.sh@941 -- # uname 00:05:30.913 06:30:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.913 06:30:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68774 00:05:30.913 killing process with pid 68774 00:05:30.913 06:30:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.913 06:30:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.913 06:30:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68774' 00:05:30.913 06:30:41 -- common/autotest_common.sh@955 -- # kill 68774 00:05:30.913 06:30:41 -- common/autotest_common.sh@960 -- # wait 68774 00:05:31.511 00:05:31.511 real 0m1.563s 00:05:31.511 user 0m1.466s 00:05:31.511 sys 0m0.459s 00:05:31.511 ************************************ 00:05:31.511 END TEST 
dpdk_mem_utility 00:05:31.511 ************************************ 00:05:31.511 06:30:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.511 06:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.511 06:30:42 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.511 06:30:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.511 06:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.511 ************************************ 00:05:31.511 START TEST event 00:05:31.511 ************************************ 00:05:31.511 06:30:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.511 * Looking for test storage... 00:05:31.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.511 06:30:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.511 06:30:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.511 06:30:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.511 06:30:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.511 06:30:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.511 06:30:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.511 06:30:42 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.511 06:30:42 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.511 06:30:42 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.511 06:30:42 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.511 06:30:42 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.511 06:30:42 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.511 06:30:42 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.511 06:30:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.511 06:30:42 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.511 06:30:42 -- scripts/common.sh@344 -- # : 1 00:05:31.511 06:30:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.511 06:30:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.511 06:30:42 -- scripts/common.sh@364 -- # decimal 1 00:05:31.511 06:30:42 -- scripts/common.sh@352 -- # local d=1 00:05:31.511 06:30:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.511 06:30:42 -- scripts/common.sh@354 -- # echo 1 00:05:31.511 06:30:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.511 06:30:42 -- scripts/common.sh@365 -- # decimal 2 00:05:31.511 06:30:42 -- scripts/common.sh@352 -- # local d=2 00:05:31.511 06:30:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.511 06:30:42 -- scripts/common.sh@354 -- # echo 2 00:05:31.511 06:30:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.511 06:30:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.511 06:30:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.511 06:30:42 -- scripts/common.sh@367 -- # return 0 00:05:31.511 06:30:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.511 --rc genhtml_branch_coverage=1 00:05:31.511 --rc genhtml_function_coverage=1 00:05:31.511 --rc genhtml_legend=1 00:05:31.511 --rc geninfo_all_blocks=1 00:05:31.511 --rc geninfo_unexecuted_blocks=1 00:05:31.511 00:05:31.511 ' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.511 --rc genhtml_branch_coverage=1 00:05:31.511 --rc genhtml_function_coverage=1 00:05:31.511 --rc genhtml_legend=1 00:05:31.511 --rc geninfo_all_blocks=1 00:05:31.511 --rc geninfo_unexecuted_blocks=1 00:05:31.511 00:05:31.511 ' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.511 --rc genhtml_branch_coverage=1 00:05:31.511 --rc genhtml_function_coverage=1 00:05:31.511 --rc genhtml_legend=1 00:05:31.511 --rc geninfo_all_blocks=1 00:05:31.511 --rc geninfo_unexecuted_blocks=1 00:05:31.511 00:05:31.511 ' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.511 --rc genhtml_branch_coverage=1 00:05:31.511 --rc genhtml_function_coverage=1 00:05:31.511 --rc genhtml_legend=1 00:05:31.511 --rc geninfo_all_blocks=1 00:05:31.511 --rc geninfo_unexecuted_blocks=1 00:05:31.511 00:05:31.511 ' 00:05:31.511 06:30:42 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.511 06:30:42 -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.511 06:30:42 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.511 06:30:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:31.511 06:30:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.511 06:30:42 -- common/autotest_common.sh@10 -- # set +x 00:05:31.511 ************************************ 00:05:31.511 START TEST event_perf 00:05:31.511 ************************************ 00:05:31.511 06:30:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.771 Running I/O for 1 seconds...[2024-11-28 06:30:42.305558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
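Each suite in this log is bracketed by START TEST/END TEST banners and a real/user/sys timing block; those come from the autotest run_test wrapper. A hedged sketch of the wrapper's shape (the actual common/autotest_common.sh helper does more — xtrace toggling, failure accounting, and the `'[' 2 -le 1 ']'` argument checks visible in the trace):

```bash
# Sketch of a run_test-style wrapper matching the banners seen in this log.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # prints the real/user/sys block seen after each suite
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test event_perf /path/to/event_perf -m 0xF -t 1   # path is illustrative
```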
00:05:31.771 [2024-11-28 06:30:42.306313] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68859 ] 00:05:31.771 [2024-11-28 06:30:42.445469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.771 [2024-11-28 06:30:42.494319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.771 [2024-11-28 06:30:42.494464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.771 [2024-11-28 06:30:42.494785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.771 [2024-11-28 06:30:42.494856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.147 Running I/O for 1 seconds... 00:05:33.147 lcore 0: 146832 00:05:33.147 lcore 1: 146834 00:05:33.147 lcore 2: 146832 00:05:33.147 lcore 3: 146833 00:05:33.147 done. 00:05:33.147 00:05:33.147 real 0m1.294s 00:05:33.147 user 0m4.088s 00:05:33.147 sys 0m0.085s 00:05:33.147 06:30:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.147 ************************************ 00:05:33.147 END TEST event_perf 00:05:33.147 ************************************ 00:05:33.147 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:33.147 06:30:43 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.147 06:30:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:33.147 06:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.147 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:33.147 ************************************ 00:05:33.147 START TEST event_reactor 00:05:33.147 ************************************ 00:05:33.147 06:30:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.147 [2024-11-28 06:30:43.650097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
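The four counters above are events processed per lcore during the one-second run (`-t 1` on core mask 0xF); summing them gives the aggregate throughput. With the numbers copied verbatim from this run:

```bash
# Sum the per-lcore counters printed by event_perf to get an aggregate rate;
# the input lines are copied verbatim from the run above.
awk '/^lcore/ { total += $3 } END { printf "%d events/s across %d lcores\n", total, NR }' <<'EOF'
lcore 0: 146832
lcore 1: 146834
lcore 2: 146832
lcore 3: 146833
EOF
# -> 587331 events/s across 4 lcores
```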
00:05:33.147 [2024-11-28 06:30:43.650207] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68893 ] 00:05:33.147 [2024-11-28 06:30:43.785217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.147 [2024-11-28 06:30:43.825774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.546 test_start 00:05:34.546 oneshot 00:05:34.546 tick 100 00:05:34.546 tick 100 00:05:34.546 tick 250 00:05:34.546 tick 100 00:05:34.546 tick 100 00:05:34.546 tick 100 00:05:34.546 tick 250 00:05:34.546 tick 500 00:05:34.546 tick 100 00:05:34.546 tick 100 00:05:34.546 tick 250 00:05:34.546 tick 100 00:05:34.546 tick 100 00:05:34.546 test_end 00:05:34.546 ************************************ 00:05:34.546 END TEST event_reactor 00:05:34.546 ************************************ 00:05:34.546 00:05:34.546 real 0m1.316s 00:05:34.546 user 0m1.138s 00:05:34.546 sys 0m0.067s 00:05:34.546 06:30:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.546 06:30:44 -- common/autotest_common.sh@10 -- # set +x 00:05:34.546 06:30:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.546 06:30:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:34.546 06:30:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.546 06:30:44 -- common/autotest_common.sh@10 -- # set +x 00:05:34.546 ************************************ 00:05:34.546 START TEST event_reactor_perf 00:05:34.546 ************************************ 00:05:34.547 06:30:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.547 [2024-11-28 06:30:45.030121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
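Tallying the tick trace above by value: if each `tick N` line is one expiry of a timer with period N milliseconds (an interpretation, not stated by the test output), the counts fit the one-second run — nine 100s, three 250s, one 500. A sketch of that tally:

```bash
# Tally the reactor tick trace above by period. Assumption: each "tick N"
# line is one expiry of a timer with period N ms, consistent with a 1 s run.
declare -A fired
while read -r word period; do
    [[ $word == tick ]] && (( fired[$period]++ ))
done <<'EOF'
tick 100
tick 100
tick 250
tick 100
tick 100
tick 100
tick 250
tick 500
tick 100
tick 100
tick 250
tick 100
tick 100
EOF
for p in "${!fired[@]}"; do
    echo "timer ${p}ms fired ${fired[$p]} time(s)"
done
```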
00:05:34.547 [2024-11-28 06:30:45.030270] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68935 ] 00:05:34.547 [2024-11-28 06:30:45.165275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.547 [2024-11-28 06:30:45.234073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.931 test_start 00:05:35.931 test_end 00:05:35.931 Performance: 304825 events per second 00:05:35.931 00:05:35.931 real 0m1.302s 00:05:35.931 user 0m1.111s 00:05:35.931 sys 0m0.082s 00:05:35.931 06:30:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.931 ************************************ 00:05:35.931 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.931 END TEST event_reactor_perf 00:05:35.931 ************************************ 00:05:35.931 06:30:46 -- event/event.sh@49 -- # uname -s 00:05:35.931 06:30:46 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:35.931 06:30:46 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.931 06:30:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.931 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.931 ************************************ 00:05:35.931 START TEST event_scheduler 00:05:35.931 ************************************ 00:05:35.931 06:30:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.931 * Looking for test storage... 00:05:35.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:35.931 06:30:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:35.931 06:30:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:35.931 06:30:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:35.931 06:30:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:35.931 06:30:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:35.931 06:30:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:35.931 06:30:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:35.931 06:30:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:35.931 06:30:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.931 06:30:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:35.931 06:30:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:35.931 06:30:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:35.931 06:30:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:35.931 06:30:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:35.931 06:30:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:35.931 06:30:46 -- scripts/common.sh@344 -- # : 1 00:05:35.931 06:30:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:35.931 06:30:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.931 06:30:46 -- scripts/common.sh@364 -- # decimal 1 00:05:35.931 06:30:46 -- scripts/common.sh@352 -- # local d=1 00:05:35.931 06:30:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.931 06:30:46 -- scripts/common.sh@354 -- # echo 1 00:05:35.931 06:30:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:35.931 06:30:46 -- scripts/common.sh@365 -- # decimal 2 00:05:35.931 06:30:46 -- scripts/common.sh@352 -- # local d=2 00:05:35.931 06:30:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.931 06:30:46 -- scripts/common.sh@354 -- # echo 2 00:05:35.931 06:30:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:35.931 06:30:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:35.931 06:30:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:35.931 06:30:46 -- scripts/common.sh@367 -- # return 0 00:05:35.931 06:30:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 06:30:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:35.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.931 --rc genhtml_branch_coverage=1 00:05:35.931 --rc genhtml_function_coverage=1 00:05:35.931 --rc genhtml_legend=1 00:05:35.931 --rc geninfo_all_blocks=1 00:05:35.931 --rc geninfo_unexecuted_blocks=1 00:05:35.931 00:05:35.931 ' 00:05:35.931 06:30:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.931 06:30:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68999 00:05:35.931 06:30:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.931 06:30:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 68999 00:05:35.931 06:30:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.931 06:30:46 -- common/autotest_common.sh@829 -- # '[' -z 68999 ']' 00:05:35.931 06:30:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.931 06:30:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.931 06:30:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
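The scheduler app is launched with `-m 0xF -p 0x2 --wait-for-rpc -f`: four cores, lcore 2 as main, and full initialization held until an RPC arrives; waitforlisten then polls the UNIX-domain socket before the framework RPCs seen just below are issued. A minimal sketch of that launch-and-wait pattern (paths taken from this log; the real waitforlisten helper also caps retries at max_retries=100):

```bash
# Sketch of the launch-and-wait pattern traced here.
scheduler=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &   # main lcore 2, init held for RPC
scheduler_pid=$!

# poll until the app is listening on the RPC socket
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

"$rpc" -s "$sock" framework_set_scheduler dynamic   # issued next in the trace
"$rpc" -s "$sock" framework_start_init
```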
00:05:35.931 06:30:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.931 06:30:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.931 [2024-11-28 06:30:46.547508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:35.931 [2024-11-28 06:30:46.547788] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68999 ] 00:05:35.931 [2024-11-28 06:30:46.684714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.190 [2024-11-28 06:30:46.716851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.190 [2024-11-28 06:30:46.717073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.190 [2024-11-28 06:30:46.717178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.190 [2024-11-28 06:30:46.717212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.758 06:30:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.758 06:30:47 -- common/autotest_common.sh@862 -- # return 0 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 POWER: Env isn't set yet! 00:05:36.758 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:36.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:36.758 POWER: Cannot set governor of lcore 0 to userspace 00:05:36.758 POWER: Attempting to initialise PSTAT power management... 00:05:36.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:36.758 POWER: Cannot set governor of lcore 0 to performance 00:05:36.758 POWER: Attempting to initialise CPPC power management... 00:05:36.758 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:36.758 POWER: Cannot set governor of lcore 0 to userspace 00:05:36.758 POWER: Attempting to initialise VM power management... 
00:05:36.758 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:36.758 POWER: Unable to set Power Management Environment for lcore 0 00:05:36.758 [2024-11-28 06:30:47.375596] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:36.758 [2024-11-28 06:30:47.375616] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:36.758 [2024-11-28 06:30:47.375625] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:36.758 [2024-11-28 06:30:47.375679] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:36.758 [2024-11-28 06:30:47.375688] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:36.758 [2024-11-28 06:30:47.375697] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 [2024-11-28 06:30:47.427257] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:36.758 06:30:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.758 06:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 ************************************ 00:05:36.758 START TEST scheduler_create_thread 00:05:36.758 ************************************ 00:05:36.758 06:30:47 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 2 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 3 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 4 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 5 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 6 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 7 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.758 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.758 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.758 8 00:05:36.758 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.758 06:30:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.759 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.759 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.759 9 00:05:36.759 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.759 06:30:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.759 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.759 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.759 10 00:05:36.759 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.759 06:30:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.759 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.759 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.759 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.759 06:30:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.759 06:30:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.759 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.759 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:37.017 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.017 06:30:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.017 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.017 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:37.017 06:30:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.017 06:30:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.017 06:30:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.017 06:30:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.017 06:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:37.275 ************************************ 00:05:37.275 END TEST scheduler_create_thread 00:05:37.275 ************************************ 00:05:37.275 06:30:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.275 00:05:37.275 real 0m0.589s 00:05:37.275 user 0m0.014s 00:05:37.275 sys 0m0.003s 00:05:37.275 06:30:48 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.275 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.533 06:30:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:37.534 06:30:48 -- scheduler/scheduler.sh@46 -- # killprocess 68999 00:05:37.534 06:30:48 -- common/autotest_common.sh@936 -- # '[' -z 68999 ']' 00:05:37.534 06:30:48 -- common/autotest_common.sh@940 -- # kill -0 68999 00:05:37.534 06:30:48 -- common/autotest_common.sh@941 -- # uname 00:05:37.534 06:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.534 06:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68999 00:05:37.534 killing process with pid 68999 00:05:37.534 06:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:37.534 06:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:37.534 06:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68999' 00:05:37.534 06:30:48 -- common/autotest_common.sh@955 -- # kill 68999 00:05:37.534 06:30:48 -- common/autotest_common.sh@960 -- # wait 68999 00:05:37.792 [2024-11-28 06:30:48.504937] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.051 ************************************ 00:05:38.051 END TEST event_scheduler 00:05:38.051 ************************************ 00:05:38.051 00:05:38.051 real 0m2.302s 00:05:38.051 user 0m4.447s 00:05:38.051 sys 0m0.328s 00:05:38.051 06:30:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.051 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:38.051 06:30:48 -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.051 06:30:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.051 06:30:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.051 06:30:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.051 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:38.051 ************************************ 00:05:38.051 START TEST app_repeat 00:05:38.051 ************************************ 00:05:38.051 06:30:48 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:38.051 06:30:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.051 06:30:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.051 06:30:48 -- event/event.sh@13 -- # local nbd_list 00:05:38.051 06:30:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.051 06:30:48 -- event/event.sh@14 -- # local bdev_list 00:05:38.051 06:30:48 -- event/event.sh@15 -- # local repeat_times=4 00:05:38.051 06:30:48 -- event/event.sh@17 -- # modprobe nbd 00:05:38.051 Process app_repeat pid: 69072 00:05:38.051 spdk_app_start Round 0 00:05:38.051 06:30:48 -- event/event.sh@19 -- # repeat_pid=69072 00:05:38.051 06:30:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.051 06:30:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 69072' 00:05:38.051 06:30:48 -- event/event.sh@23 -- # for i in {0..2} 00:05:38.051 06:30:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.051 06:30:48 -- event/event.sh@25 -- # waitforlisten 69072 /var/tmp/spdk-nbd.sock 00:05:38.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
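app_repeat runs against its own RPC socket (`-r /var/tmp/spdk-nbd.sock`) on two cores (`-m 0x3`) for four repeats (`-t 4`); each round creates two 64 MiB malloc bdevs, exports them over NBD, and verifies each device with a direct-I/O read, as the traces just below show. One round trip, condensed (commands from the log; /tmp/nbdtest is an illustrative path — the test writes under spdk/test/event/nbdtest):

```bash
# One malloc-bdev/NBD round trip, condensed from the traces below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

$rpc -s "$sock" bdev_malloc_create 64 4096        # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
$rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0  # expose it as /dev/nbd0

until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done  # waitfornbd

# read one block through the kernel NBD device, bypassing the page cache,
# and check that a full 4096-byte block came back
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "/dev/nbd0 verified"
```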
00:05:38.051 06:30:48 -- common/autotest_common.sh@829 -- # '[' -z 69072 ']' 00:05:38.051 06:30:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.051 06:30:48 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.051 06:30:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.051 06:30:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.051 06:30:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.051 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:38.051 [2024-11-28 06:30:48.741525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:38.051 [2024-11-28 06:30:48.741634] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69072 ] 00:05:38.309 [2024-11-28 06:30:48.877361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.309 [2024-11-28 06:30:48.919398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.309 [2024-11-28 06:30:48.919425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.875 06:30:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.875 06:30:49 -- common/autotest_common.sh@862 -- # return 0 00:05:38.875 06:30:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.133 Malloc0 00:05:39.133 06:30:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.391 Malloc1 00:05:39.391 06:30:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@12 -- # local i 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.391 06:30:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.649 /dev/nbd0 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.649 06:30:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 
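Each app_repeat round repeats the setup seen here: two 64 MB, 4096-byte-block malloc bdevs are created over the app's RPC socket and handed to the kernel nbd driver. In sketch form, assuming scripts/rpc.py from the SPDK repo is on PATH and the nbd module is already loaded (modprobe nbd, as the test does):

  sock=/var/tmp/spdk-nbd.sock
  rpc.py -s "$sock" bdev_malloc_create 64 4096     # prints the new bdev name: Malloc0
  rpc.py -s "$sock" bdev_malloc_create 64 4096     # prints Malloc1
  rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1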
00:05:39.649 06:30:50 -- common/autotest_common.sh@867 -- # local i 00:05:39.649 06:30:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.649 06:30:50 -- common/autotest_common.sh@871 -- # break 00:05:39.649 06:30:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.649 1+0 records in 00:05:39.649 1+0 records out 00:05:39.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198266 s, 20.7 MB/s 00:05:39.649 06:30:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.649 06:30:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.649 06:30:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.649 06:30:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.649 06:30:50 -- common/autotest_common.sh@887 -- # return 0 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.649 /dev/nbd1 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.649 06:30:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.649 06:30:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:39.649 06:30:50 -- common/autotest_common.sh@867 -- # local i 00:05:39.649 06:30:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:39.649 06:30:50 -- common/autotest_common.sh@871 -- # break 00:05:39.649 06:30:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.649 06:30:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.649 1+0 records in 00:05:39.649 1+0 records out 00:05:39.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208374 s, 19.7 MB/s 00:05:39.909 06:30:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.909 06:30:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:39.909 06:30:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.909 06:30:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.909 06:30:50 -- common/autotest_common.sh@887 -- # return 0 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 
00:05:39.909 { 00:05:39.909 "nbd_device": "/dev/nbd0", 00:05:39.909 "bdev_name": "Malloc0" 00:05:39.909 }, 00:05:39.909 { 00:05:39.909 "nbd_device": "/dev/nbd1", 00:05:39.909 "bdev_name": "Malloc1" 00:05:39.909 } 00:05:39.909 ]' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.909 { 00:05:39.909 "nbd_device": "/dev/nbd0", 00:05:39.909 "bdev_name": "Malloc0" 00:05:39.909 }, 00:05:39.909 { 00:05:39.909 "nbd_device": "/dev/nbd1", 00:05:39.909 "bdev_name": "Malloc1" 00:05:39.909 } 00:05:39.909 ]' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.909 /dev/nbd1' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.909 /dev/nbd1' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.909 256+0 records in 00:05:39.909 256+0 records out 00:05:39.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805097 s, 130 MB/s 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.909 06:30:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.168 256+0 records in 00:05:40.168 256+0 records out 00:05:40.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193562 s, 54.2 MB/s 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.168 256+0 records in 00:05:40.168 256+0 records out 00:05:40.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189485 s, 55.3 MB/s 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.168 06:30:50 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@41 -- # break 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.168 06:30:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@41 -- # break 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.426 06:30:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@65 -- # true 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.684 06:30:51 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.684 06:30:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.942 06:30:51 -- event/event.sh@35 -- # sleep 3 00:05:40.942 [2024-11-28 06:30:51.695904] app.c: 798:spdk_app_start: *NOTICE*: 
Total cores available: 2 00:05:41.200 [2024-11-28 06:30:51.733200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.200 [2024-11-28 06:30:51.733204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.200 [2024-11-28 06:30:51.771943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.200 [2024-11-28 06:30:51.771999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.483 06:30:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:44.483 spdk_app_start Round 1 00:05:44.483 06:30:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.483 06:30:54 -- event/event.sh@25 -- # waitforlisten 69072 /var/tmp/spdk-nbd.sock 00:05:44.483 06:30:54 -- common/autotest_common.sh@829 -- # '[' -z 69072 ']' 00:05:44.483 06:30:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.483 06:30:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.483 06:30:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.483 06:30:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.483 06:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:44.483 06:30:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.483 06:30:54 -- common/autotest_common.sh@862 -- # return 0 00:05:44.483 06:30:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.483 Malloc0 00:05:44.483 06:30:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.483 Malloc1 00:05:44.483 06:30:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@12 -- # local i 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.483 06:30:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.740 /dev/nbd0 00:05:44.740 06:30:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.740 06:30:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.740 06:30:55 -- common/autotest_common.sh@866 -- 
# local nbd_name=nbd0 00:05:44.740 06:30:55 -- common/autotest_common.sh@867 -- # local i 00:05:44.740 06:30:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.740 06:30:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.740 06:30:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:44.740 06:30:55 -- common/autotest_common.sh@871 -- # break 00:05:44.740 06:30:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.740 06:30:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.740 06:30:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.740 1+0 records in 00:05:44.740 1+0 records out 00:05:44.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246896 s, 16.6 MB/s 00:05:44.740 06:30:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.740 06:30:55 -- common/autotest_common.sh@884 -- # size=4096 00:05:44.740 06:30:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.741 06:30:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.741 06:30:55 -- common/autotest_common.sh@887 -- # return 0 00:05:44.741 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.741 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.741 06:30:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.998 /dev/nbd1 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.998 06:30:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:44.998 06:30:55 -- common/autotest_common.sh@867 -- # local i 00:05:44.998 06:30:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:44.998 06:30:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:44.998 06:30:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:44.998 06:30:55 -- common/autotest_common.sh@871 -- # break 00:05:44.998 06:30:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:44.998 06:30:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:44.998 06:30:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.998 1+0 records in 00:05:44.998 1+0 records out 00:05:44.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205204 s, 20.0 MB/s 00:05:44.998 06:30:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.998 06:30:55 -- common/autotest_common.sh@884 -- # size=4096 00:05:44.998 06:30:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.998 06:30:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:44.998 06:30:55 -- common/autotest_common.sh@887 -- # return 0 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.998 06:30:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@63 
-- # nbd_disks_json='[ 00:05:45.256 { 00:05:45.256 "nbd_device": "/dev/nbd0", 00:05:45.256 "bdev_name": "Malloc0" 00:05:45.256 }, 00:05:45.256 { 00:05:45.256 "nbd_device": "/dev/nbd1", 00:05:45.256 "bdev_name": "Malloc1" 00:05:45.256 } 00:05:45.256 ]' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.256 { 00:05:45.256 "nbd_device": "/dev/nbd0", 00:05:45.256 "bdev_name": "Malloc0" 00:05:45.256 }, 00:05:45.256 { 00:05:45.256 "nbd_device": "/dev/nbd1", 00:05:45.256 "bdev_name": "Malloc1" 00:05:45.256 } 00:05:45.256 ]' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.256 /dev/nbd1' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.256 /dev/nbd1' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.256 256+0 records in 00:05:45.256 256+0 records out 00:05:45.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719123 s, 146 MB/s 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.256 256+0 records in 00:05:45.256 256+0 records out 00:05:45.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152748 s, 68.6 MB/s 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.256 256+0 records in 00:05:45.256 256+0 records out 00:05:45.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01815 s, 57.8 MB/s 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
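The write/verify pass traced here (the cmp for /dev/nbd1 just below closes the loop) is a plain dd round-trip: fill a scratch file from /dev/urandom, push it through each nbd device with O_DIRECT, then byte-compare the first 1 MiB back out. A sketch of the pattern, with the repo path shortened to a placeholder:

  tmp=$TESTDIR/nbdrandtest                  # $TESTDIR stands in for .../spdk/test/event
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through the SPDK bdev
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"              # any differing byte fails the test
  done
  rm "$tmp"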
00:05:45.256 06:30:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@51 -- # local i 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.256 06:30:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@41 -- # break 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.514 06:30:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@41 -- # break 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.771 06:30:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@65 -- # true 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.030 06:30:56 -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.030 06:30:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.030 06:30:56 -- event/event.sh@35 -- # sleep 3 00:05:46.288 [2024-11-28 06:30:56.883491] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.288 [2024-11-28 06:30:56.921725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.288 [2024-11-28 06:30:56.921747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.288 [2024-11-28 06:30:56.960330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.288 [2024-11-28 06:30:56.960401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.567 spdk_app_start Round 2 00:05:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.567 06:30:59 -- event/event.sh@23 -- # for i in {0..2} 00:05:49.567 06:30:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.567 06:30:59 -- event/event.sh@25 -- # waitforlisten 69072 /var/tmp/spdk-nbd.sock 00:05:49.567 06:30:59 -- common/autotest_common.sh@829 -- # '[' -z 69072 ']' 00:05:49.567 06:30:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.567 06:30:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.567 06:30:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.567 06:30:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.567 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:49.567 06:30:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.567 06:30:59 -- common/autotest_common.sh@862 -- # return 0 00:05:49.567 06:30:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.567 Malloc0 00:05:49.567 06:31:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.825 Malloc1 00:05:49.825 06:31:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@12 -- # local i 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.825 /dev/nbd0 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.825 06:31:00 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:49.825 06:31:00 -- common/autotest_common.sh@867 -- # local i 00:05:49.825 06:31:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.825 06:31:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.825 06:31:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:49.825 06:31:00 -- common/autotest_common.sh@871 -- # break 00:05:49.825 06:31:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.825 06:31:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.825 06:31:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.825 1+0 records in 00:05:49.825 1+0 records out 00:05:49.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149155 s, 27.5 MB/s 00:05:49.825 06:31:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.825 06:31:00 -- common/autotest_common.sh@884 -- # size=4096 00:05:49.825 06:31:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.825 06:31:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.825 06:31:00 -- common/autotest_common.sh@887 -- # return 0 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.825 06:31:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.083 /dev/nbd1 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.083 06:31:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:50.083 06:31:00 -- common/autotest_common.sh@867 -- # local i 00:05:50.083 06:31:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:50.083 06:31:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:50.083 06:31:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:50.083 06:31:00 -- common/autotest_common.sh@871 -- # break 00:05:50.083 06:31:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:50.083 06:31:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:50.083 06:31:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.083 1+0 records in 00:05:50.083 1+0 records out 00:05:50.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207341 s, 19.8 MB/s 00:05:50.083 06:31:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.083 06:31:00 -- common/autotest_common.sh@884 -- # size=4096 00:05:50.083 06:31:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.083 06:31:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:50.083 06:31:00 -- common/autotest_common.sh@887 -- # return 0 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.083 06:31:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.341 
06:31:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.341 { 00:05:50.341 "nbd_device": "/dev/nbd0", 00:05:50.341 "bdev_name": "Malloc0" 00:05:50.341 }, 00:05:50.341 { 00:05:50.341 "nbd_device": "/dev/nbd1", 00:05:50.341 "bdev_name": "Malloc1" 00:05:50.341 } 00:05:50.341 ]' 00:05:50.341 06:31:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.341 06:31:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.341 { 00:05:50.341 "nbd_device": "/dev/nbd0", 00:05:50.341 "bdev_name": "Malloc0" 00:05:50.341 }, 00:05:50.341 { 00:05:50.341 "nbd_device": "/dev/nbd1", 00:05:50.341 "bdev_name": "Malloc1" 00:05:50.341 } 00:05:50.341 ]' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.341 /dev/nbd1' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.341 /dev/nbd1' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.341 256+0 records in 00:05:50.341 256+0 records out 00:05:50.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010764 s, 97.4 MB/s 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.341 256+0 records in 00:05:50.341 256+0 records out 00:05:50.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172383 s, 60.8 MB/s 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.341 256+0 records in 00:05:50.341 256+0 records out 00:05:50.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176304 s, 59.5 MB/s 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@51 -- # local i 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.341 06:31:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@41 -- # break 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.599 06:31:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@41 -- # break 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.856 06:31:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@65 -- # true 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.114 06:31:01 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.114 06:31:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.371 06:31:01 -- event/event.sh@35 -- # sleep 3 00:05:51.371 [2024-11-28 
06:31:02.062119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.371 [2024-11-28 06:31:02.100672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.371 [2024-11-28 06:31:02.100790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.371 [2024-11-28 06:31:02.139390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.371 [2024-11-28 06:31:02.139444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.674 06:31:04 -- event/event.sh@38 -- # waitforlisten 69072 /var/tmp/spdk-nbd.sock 00:05:54.674 06:31:04 -- common/autotest_common.sh@829 -- # '[' -z 69072 ']' 00:05:54.674 06:31:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.674 06:31:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.674 06:31:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.674 06:31:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.674 06:31:04 -- common/autotest_common.sh@10 -- # set +x 00:05:54.674 06:31:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.674 06:31:05 -- common/autotest_common.sh@862 -- # return 0 00:05:54.674 06:31:05 -- event/event.sh@39 -- # killprocess 69072 00:05:54.674 06:31:05 -- common/autotest_common.sh@936 -- # '[' -z 69072 ']' 00:05:54.674 06:31:05 -- common/autotest_common.sh@940 -- # kill -0 69072 00:05:54.674 06:31:05 -- common/autotest_common.sh@941 -- # uname 00:05:54.674 06:31:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.674 06:31:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69072 00:05:54.674 killing process with pid 69072 00:05:54.674 06:31:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.674 06:31:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.674 06:31:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69072' 00:05:54.674 06:31:05 -- common/autotest_common.sh@955 -- # kill 69072 00:05:54.674 06:31:05 -- common/autotest_common.sh@960 -- # wait 69072 00:05:54.674 spdk_app_start is called in Round 0. 00:05:54.674 Shutdown signal received, stop current app iteration 00:05:54.674 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:54.674 spdk_app_start is called in Round 1. 00:05:54.674 Shutdown signal received, stop current app iteration 00:05:54.674 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:54.674 spdk_app_start is called in Round 2. 00:05:54.674 Shutdown signal received, stop current app iteration 00:05:54.674 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:05:54.674 spdk_app_start is called in Round 3. 
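The Round 0 through Round 3 messages above come from app_repeat being driven through repeated start/stop iterations (its final shutdown line follows just below). The harness side is a short loop: verify, ask the instance to stop via RPC, give it time to reinitialize, repeat. A rough sketch of that driver, not the verbatim event.sh:

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # ... bdev_malloc_create, nbd_start_disk, dd/cmp verify, nbd_stop_disk ...
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this iteration
    sleep 3                                                       # let the app come back up
  done
  killprocess "$repeat_pid"                 # after the last round, stop the process for good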
00:05:54.674 Shutdown signal received, stop current app iteration 00:05:54.674 06:31:05 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.674 06:31:05 -- event/event.sh@42 -- # return 0 00:05:54.674 00:05:54.674 real 0m16.611s 00:05:54.674 user 0m36.710s 00:05:54.674 sys 0m2.047s 00:05:54.674 06:31:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.674 06:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.674 ************************************ 00:05:54.674 END TEST app_repeat 00:05:54.674 ************************************ 00:05:54.674 06:31:05 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.674 06:31:05 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.674 06:31:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.674 06:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.674 06:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.674 ************************************ 00:05:54.674 START TEST cpu_locks 00:05:54.674 ************************************ 00:05:54.674 06:31:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.674 * Looking for test storage... 00:05:54.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:54.674 06:31:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.674 06:31:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.674 06:31:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.932 06:31:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.932 06:31:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.932 06:31:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.932 06:31:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.932 06:31:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.932 06:31:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.932 06:31:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.932 06:31:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.932 06:31:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.932 06:31:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.932 06:31:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.932 06:31:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.932 06:31:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.932 06:31:05 -- scripts/common.sh@344 -- # : 1 00:05:54.932 06:31:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.932 06:31:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.932 06:31:05 -- scripts/common.sh@364 -- # decimal 1 00:05:54.932 06:31:05 -- scripts/common.sh@352 -- # local d=1 00:05:54.932 06:31:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.932 06:31:05 -- scripts/common.sh@354 -- # echo 1 00:05:54.932 06:31:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.932 06:31:05 -- scripts/common.sh@365 -- # decimal 2 00:05:54.932 06:31:05 -- scripts/common.sh@352 -- # local d=2 00:05:54.932 06:31:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.932 06:31:05 -- scripts/common.sh@354 -- # echo 2 00:05:54.932 06:31:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.932 06:31:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.932 06:31:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.932 06:31:05 -- scripts/common.sh@367 -- # return 0 00:05:54.932 06:31:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.932 06:31:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.932 --rc genhtml_branch_coverage=1 00:05:54.932 --rc genhtml_function_coverage=1 00:05:54.932 --rc genhtml_legend=1 00:05:54.932 --rc geninfo_all_blocks=1 00:05:54.932 --rc geninfo_unexecuted_blocks=1 00:05:54.932 00:05:54.932 ' 00:05:54.932 06:31:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.932 --rc genhtml_branch_coverage=1 00:05:54.932 --rc genhtml_function_coverage=1 00:05:54.932 --rc genhtml_legend=1 00:05:54.932 --rc geninfo_all_blocks=1 00:05:54.932 --rc geninfo_unexecuted_blocks=1 00:05:54.932 00:05:54.932 ' 00:05:54.932 06:31:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.933 --rc genhtml_branch_coverage=1 00:05:54.933 --rc genhtml_function_coverage=1 00:05:54.933 --rc genhtml_legend=1 00:05:54.933 --rc geninfo_all_blocks=1 00:05:54.933 --rc geninfo_unexecuted_blocks=1 00:05:54.933 00:05:54.933 ' 00:05:54.933 06:31:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.933 --rc genhtml_branch_coverage=1 00:05:54.933 --rc genhtml_function_coverage=1 00:05:54.933 --rc genhtml_legend=1 00:05:54.933 --rc geninfo_all_blocks=1 00:05:54.933 --rc geninfo_unexecuted_blocks=1 00:05:54.933 00:05:54.933 ' 00:05:54.933 06:31:05 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.933 06:31:05 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.933 06:31:05 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.933 06:31:05 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.933 06:31:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.933 06:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.933 06:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.933 ************************************ 00:05:54.933 START TEST default_locks 00:05:54.933 ************************************ 00:05:54.933 06:31:05 -- common/autotest_common.sh@1114 -- # default_locks 00:05:54.933 06:31:05 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69496 00:05:54.933 06:31:05 -- event/cpu_locks.sh@47 -- # waitforlisten 69496 00:05:54.933 06:31:05 -- common/autotest_common.sh@829 -- # '[' -z 69496 ']' 00:05:54.933 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.933 06:31:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.933 06:31:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.933 06:31:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.933 06:31:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.933 06:31:05 -- common/autotest_common.sh@10 -- # set +x 00:05:54.933 06:31:05 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.933 [2024-11-28 06:31:05.564902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:54.933 [2024-11-28 06:31:05.565021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69496 ] 00:05:54.933 [2024-11-28 06:31:05.695561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.191 [2024-11-28 06:31:05.741819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.191 [2024-11-28 06:31:05.741998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.758 06:31:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.758 06:31:06 -- common/autotest_common.sh@862 -- # return 0 00:05:55.758 06:31:06 -- event/cpu_locks.sh@49 -- # locks_exist 69496 00:05:55.758 06:31:06 -- event/cpu_locks.sh@22 -- # lslocks -p 69496 00:05:55.758 06:31:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.017 06:31:06 -- event/cpu_locks.sh@50 -- # killprocess 69496 00:05:56.017 06:31:06 -- common/autotest_common.sh@936 -- # '[' -z 69496 ']' 00:05:56.017 06:31:06 -- common/autotest_common.sh@940 -- # kill -0 69496 00:05:56.017 06:31:06 -- common/autotest_common.sh@941 -- # uname 00:05:56.017 06:31:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.017 06:31:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69496 00:05:56.017 06:31:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.017 killing process with pid 69496 00:05:56.017 06:31:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.017 06:31:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69496' 00:05:56.017 06:31:06 -- common/autotest_common.sh@955 -- # kill 69496 00:05:56.017 06:31:06 -- common/autotest_common.sh@960 -- # wait 69496 00:05:56.276 06:31:06 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69496 00:05:56.276 06:31:06 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.276 06:31:06 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69496 00:05:56.276 06:31:06 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.276 06:31:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.276 06:31:06 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.276 06:31:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.276 06:31:06 -- common/autotest_common.sh@653 -- # waitforlisten 69496 00:05:56.276 06:31:06 -- common/autotest_common.sh@829 -- # '[' -z 69496 ']' 00:05:56.276 06:31:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.276 06:31:06 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.276 06:31:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.276 06:31:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.276 06:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:56.276 ERROR: process (pid: 69496) is no longer running 00:05:56.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69496) - No such process 00:05:56.276 06:31:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.276 06:31:06 -- common/autotest_common.sh@862 -- # return 1 00:05:56.276 06:31:06 -- common/autotest_common.sh@653 -- # es=1 00:05:56.276 06:31:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.276 06:31:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.276 06:31:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.276 06:31:06 -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.276 06:31:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.276 06:31:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.276 06:31:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.276 00:05:56.276 real 0m1.388s 00:05:56.276 user 0m1.382s 00:05:56.276 sys 0m0.402s 00:05:56.276 06:31:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.277 06:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 ************************************ 00:05:56.277 END TEST default_locks 00:05:56.277 ************************************ 00:05:56.277 06:31:06 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.277 06:31:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.277 06:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.277 06:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 ************************************ 00:05:56.277 START TEST default_locks_via_rpc 00:05:56.277 ************************************ 00:05:56.277 06:31:06 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:56.277 06:31:06 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69538 00:05:56.277 06:31:06 -- event/cpu_locks.sh@63 -- # waitforlisten 69538 00:05:56.277 06:31:06 -- common/autotest_common.sh@829 -- # '[' -z 69538 ']' 00:05:56.277 06:31:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.277 06:31:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.277 06:31:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.277 06:31:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.277 06:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:56.277 06:31:06 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.277 [2024-11-28 06:31:06.998623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:05:56.277 [2024-11-28 06:31:06.998756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69538 ] 00:05:56.535 [2024-11-28 06:31:07.131585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.535 [2024-11-28 06:31:07.170618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.535 [2024-11-28 06:31:07.170829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.100 06:31:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.100 06:31:07 -- common/autotest_common.sh@862 -- # return 0 00:05:57.100 06:31:07 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.100 06:31:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.100 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 06:31:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.100 06:31:07 -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.100 06:31:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.100 06:31:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.100 06:31:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.100 06:31:07 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.100 06:31:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.100 06:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:57.100 06:31:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.100 06:31:07 -- event/cpu_locks.sh@71 -- # locks_exist 69538 00:05:57.100 06:31:07 -- event/cpu_locks.sh@22 -- # lslocks -p 69538 00:05:57.100 06:31:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.357 06:31:07 -- event/cpu_locks.sh@73 -- # killprocess 69538 00:05:57.357 06:31:07 -- common/autotest_common.sh@936 -- # '[' -z 69538 ']' 00:05:57.357 06:31:07 -- common/autotest_common.sh@940 -- # kill -0 69538 00:05:57.357 06:31:07 -- common/autotest_common.sh@941 -- # uname 00:05:57.357 06:31:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.357 06:31:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69538 00:05:57.357 06:31:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.357 06:31:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.357 06:31:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69538' 00:05:57.357 killing process with pid 69538 00:05:57.357 06:31:08 -- common/autotest_common.sh@955 -- # kill 69538 00:05:57.357 06:31:08 -- common/autotest_common.sh@960 -- # wait 69538 00:05:57.615 00:05:57.615 real 0m1.385s 00:05:57.615 user 0m1.386s 00:05:57.615 sys 0m0.412s 00:05:57.615 ************************************ 00:05:57.615 END TEST default_locks_via_rpc 00:05:57.615 06:31:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.615 06:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.615 ************************************ 00:05:57.615 06:31:08 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.615 06:31:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.615 06:31:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.615 06:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.615 
************************************ 00:05:57.615 START TEST non_locking_app_on_locked_coremask 00:05:57.615 ************************************ 00:05:57.615 06:31:08 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:57.615 06:31:08 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69585 00:05:57.615 06:31:08 -- event/cpu_locks.sh@81 -- # waitforlisten 69585 /var/tmp/spdk.sock 00:05:57.615 06:31:08 -- common/autotest_common.sh@829 -- # '[' -z 69585 ']' 00:05:57.615 06:31:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.615 06:31:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.615 06:31:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.615 06:31:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.615 06:31:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.615 06:31:08 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.872 [2024-11-28 06:31:08.421034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:57.873 [2024-11-28 06:31:08.421151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69585 ] 00:05:57.873 [2024-11-28 06:31:08.554720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.873 [2024-11-28 06:31:08.593355] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.873 [2024-11-28 06:31:08.593534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.805 06:31:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.805 06:31:09 -- common/autotest_common.sh@862 -- # return 0 00:05:58.805 06:31:09 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69595 00:05:58.805 06:31:09 -- event/cpu_locks.sh@85 -- # waitforlisten 69595 /var/tmp/spdk2.sock 00:05:58.805 06:31:09 -- common/autotest_common.sh@829 -- # '[' -z 69595 ']' 00:05:58.805 06:31:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.805 06:31:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.805 06:31:09 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.805 06:31:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.805 06:31:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.805 06:31:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.805 [2024-11-28 06:31:09.305796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:05:58.805 [2024-11-28 06:31:09.305908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69595 ] 00:05:58.805 [2024-11-28 06:31:09.438080] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.805 [2024-11-28 06:31:09.438161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.805 [2024-11-28 06:31:09.532768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.805 [2024-11-28 06:31:09.532974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.371 06:31:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.371 06:31:10 -- common/autotest_common.sh@862 -- # return 0 00:05:59.371 06:31:10 -- event/cpu_locks.sh@87 -- # locks_exist 69585 00:05:59.371 06:31:10 -- event/cpu_locks.sh@22 -- # lslocks -p 69585 00:05:59.371 06:31:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.629 06:31:10 -- event/cpu_locks.sh@89 -- # killprocess 69585 00:05:59.629 06:31:10 -- common/autotest_common.sh@936 -- # '[' -z 69585 ']' 00:05:59.629 06:31:10 -- common/autotest_common.sh@940 -- # kill -0 69585 00:05:59.629 06:31:10 -- common/autotest_common.sh@941 -- # uname 00:05:59.629 06:31:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.629 06:31:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69585 00:05:59.887 06:31:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.887 06:31:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.887 killing process with pid 69585 00:05:59.887 06:31:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69585' 00:05:59.887 06:31:10 -- common/autotest_common.sh@955 -- # kill 69585 00:05:59.887 06:31:10 -- common/autotest_common.sh@960 -- # wait 69585 00:06:00.452 06:31:10 -- event/cpu_locks.sh@90 -- # killprocess 69595 00:06:00.452 06:31:10 -- common/autotest_common.sh@936 -- # '[' -z 69595 ']' 00:06:00.452 06:31:10 -- common/autotest_common.sh@940 -- # kill -0 69595 00:06:00.452 06:31:10 -- common/autotest_common.sh@941 -- # uname 00:06:00.452 06:31:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.452 06:31:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69595 00:06:00.452 06:31:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.452 06:31:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.452 06:31:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69595' 00:06:00.452 killing process with pid 69595 00:06:00.452 06:31:11 -- common/autotest_common.sh@955 -- # kill 69595 00:06:00.452 06:31:11 -- common/autotest_common.sh@960 -- # wait 69595 00:06:00.709 00:06:00.709 real 0m2.937s 00:06:00.709 user 0m3.175s 00:06:00.709 sys 0m0.758s 00:06:00.709 06:31:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.709 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:00.709 ************************************ 00:06:00.709 END TEST non_locking_app_on_locked_coremask 00:06:00.709 ************************************ 00:06:00.709 06:31:11 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.709 06:31:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.709 06:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.709 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:00.709 ************************************ 00:06:00.709 START TEST locking_app_on_unlocked_coremask 00:06:00.709 ************************************ 00:06:00.709 06:31:11 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:00.709 06:31:11 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69653 00:06:00.709 06:31:11 -- event/cpu_locks.sh@99 -- # waitforlisten 69653 /var/tmp/spdk.sock 00:06:00.709 06:31:11 -- common/autotest_common.sh@829 -- # '[' -z 69653 ']' 00:06:00.709 06:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.709 06:31:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.709 06:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.709 06:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.709 06:31:11 -- common/autotest_common.sh@10 -- # set +x 00:06:00.709 06:31:11 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.709 [2024-11-28 06:31:11.398659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:00.709 [2024-11-28 06:31:11.398786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69653 ] 00:06:00.966 [2024-11-28 06:31:11.531215] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.966 [2024-11-28 06:31:11.531274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.966 [2024-11-28 06:31:11.570221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.966 [2024-11-28 06:31:11.570419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.532 06:31:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.532 06:31:12 -- common/autotest_common.sh@862 -- # return 0 00:06:01.532 06:31:12 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.532 06:31:12 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69669 00:06:01.532 06:31:12 -- event/cpu_locks.sh@103 -- # waitforlisten 69669 /var/tmp/spdk2.sock 00:06:01.532 06:31:12 -- common/autotest_common.sh@829 -- # '[' -z 69669 ']' 00:06:01.532 06:31:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.532 06:31:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.532 06:31:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.532 06:31:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.532 06:31:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.532 [2024-11-28 06:31:12.278111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:01.532 [2024-11-28 06:31:12.278236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69669 ] 00:06:01.791 [2024-11-28 06:31:12.413092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.791 [2024-11-28 06:31:12.491928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.791 [2024-11-28 06:31:12.492115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.359 06:31:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.359 06:31:13 -- common/autotest_common.sh@862 -- # return 0 00:06:02.359 06:31:13 -- event/cpu_locks.sh@105 -- # locks_exist 69669 00:06:02.359 06:31:13 -- event/cpu_locks.sh@22 -- # lslocks -p 69669 00:06:02.359 06:31:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.925 06:31:13 -- event/cpu_locks.sh@107 -- # killprocess 69653 00:06:02.925 06:31:13 -- common/autotest_common.sh@936 -- # '[' -z 69653 ']' 00:06:02.925 06:31:13 -- common/autotest_common.sh@940 -- # kill -0 69653 00:06:02.925 06:31:13 -- common/autotest_common.sh@941 -- # uname 00:06:02.925 06:31:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.925 06:31:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69653 00:06:02.925 06:31:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.925 06:31:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.925 killing process with pid 69653 00:06:02.925 06:31:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69653' 00:06:02.925 06:31:13 -- common/autotest_common.sh@955 -- # kill 69653 00:06:02.925 06:31:13 -- common/autotest_common.sh@960 -- # wait 69653 00:06:03.491 06:31:13 -- event/cpu_locks.sh@108 -- # killprocess 69669 00:06:03.491 06:31:13 -- common/autotest_common.sh@936 -- # '[' -z 69669 ']' 00:06:03.491 06:31:13 -- common/autotest_common.sh@940 -- # kill -0 69669 00:06:03.491 06:31:13 -- common/autotest_common.sh@941 -- # uname 00:06:03.491 06:31:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.491 06:31:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69669 00:06:03.491 06:31:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.491 killing process with pid 69669 00:06:03.491 06:31:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.491 06:31:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69669' 00:06:03.491 06:31:14 -- common/autotest_common.sh@955 -- # kill 69669 00:06:03.491 06:31:14 -- common/autotest_common.sh@960 -- # wait 69669 00:06:03.749 00:06:03.749 real 0m2.966s 00:06:03.749 user 0m3.180s 00:06:03.749 sys 0m0.782s 00:06:03.749 06:31:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.749 06:31:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.749 ************************************ 00:06:03.749 END TEST locking_app_on_unlocked_coremask 00:06:03.749 ************************************ 00:06:03.749 06:31:14 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.749 06:31:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.749 06:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.749 06:31:14 -- common/autotest_common.sh@10 -- # set +x 
00:06:03.749 ************************************ 00:06:03.749 START TEST locking_app_on_locked_coremask 00:06:03.749 ************************************ 00:06:03.749 06:31:14 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:03.749 06:31:14 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69727 00:06:03.749 06:31:14 -- event/cpu_locks.sh@116 -- # waitforlisten 69727 /var/tmp/spdk.sock 00:06:03.749 06:31:14 -- common/autotest_common.sh@829 -- # '[' -z 69727 ']' 00:06:03.749 06:31:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.749 06:31:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.749 06:31:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.749 06:31:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.749 06:31:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.749 06:31:14 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.749 [2024-11-28 06:31:14.399761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:03.749 [2024-11-28 06:31:14.399870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69727 ] 00:06:04.008 [2024-11-28 06:31:14.534717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.008 [2024-11-28 06:31:14.573495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.008 [2024-11-28 06:31:14.573673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.574 06:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.574 06:31:15 -- common/autotest_common.sh@862 -- # return 0 00:06:04.574 06:31:15 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69743 00:06:04.574 06:31:15 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69743 /var/tmp/spdk2.sock 00:06:04.574 06:31:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:04.574 06:31:15 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69743 /var/tmp/spdk2.sock 00:06:04.574 06:31:15 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.574 06:31:15 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.574 06:31:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.574 06:31:15 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.574 06:31:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.574 06:31:15 -- common/autotest_common.sh@653 -- # waitforlisten 69743 /var/tmp/spdk2.sock 00:06:04.574 06:31:15 -- common/autotest_common.sh@829 -- # '[' -z 69743 ']' 00:06:04.574 06:31:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.574 06:31:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.574 06:31:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:04.574 06:31:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.574 06:31:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.574 [2024-11-28 06:31:15.279916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:04.574 [2024-11-28 06:31:15.280027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69743 ] 00:06:04.833 [2024-11-28 06:31:15.413092] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69727 has claimed it. 00:06:04.833 [2024-11-28 06:31:15.413171] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69743) - No such process 00:06:05.399 ERROR: process (pid: 69743) is no longer running 00:06:05.399 06:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.399 06:31:15 -- common/autotest_common.sh@862 -- # return 1 00:06:05.399 06:31:15 -- common/autotest_common.sh@653 -- # es=1 00:06:05.399 06:31:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.399 06:31:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.399 06:31:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.399 06:31:15 -- event/cpu_locks.sh@122 -- # locks_exist 69727 00:06:05.399 06:31:15 -- event/cpu_locks.sh@22 -- # lslocks -p 69727 00:06:05.399 06:31:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.399 06:31:16 -- event/cpu_locks.sh@124 -- # killprocess 69727 00:06:05.399 06:31:16 -- common/autotest_common.sh@936 -- # '[' -z 69727 ']' 00:06:05.399 06:31:16 -- common/autotest_common.sh@940 -- # kill -0 69727 00:06:05.399 06:31:16 -- common/autotest_common.sh@941 -- # uname 00:06:05.399 06:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.399 06:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69727 00:06:05.399 06:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.399 06:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.399 killing process with pid 69727 00:06:05.399 06:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69727' 00:06:05.399 06:31:16 -- common/autotest_common.sh@955 -- # kill 69727 00:06:05.399 06:31:16 -- common/autotest_common.sh@960 -- # wait 69727 00:06:05.657 00:06:05.657 real 0m2.073s 00:06:05.657 user 0m2.292s 00:06:05.657 sys 0m0.477s 00:06:05.657 06:31:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.657 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.657 ************************************ 00:06:05.657 END TEST locking_app_on_locked_coremask 00:06:05.657 ************************************ 00:06:05.915 06:31:16 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:05.915 06:31:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.915 06:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.915 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.915 ************************************ 00:06:05.915 START TEST locking_overlapped_coremask 00:06:05.915 ************************************ 00:06:05.915 06:31:16 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:05.915 06:31:16 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69791 00:06:05.915 06:31:16 -- event/cpu_locks.sh@133 -- # waitforlisten 69791 /var/tmp/spdk.sock 00:06:05.915 06:31:16 -- common/autotest_common.sh@829 -- # '[' -z 69791 ']' 00:06:05.915 06:31:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.915 06:31:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.915 06:31:16 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:05.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.915 06:31:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.915 06:31:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.915 06:31:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.915 [2024-11-28 06:31:16.518278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:05.915 [2024-11-28 06:31:16.518389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69791 ] 00:06:05.915 [2024-11-28 06:31:16.643788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.915 [2024-11-28 06:31:16.683848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.915 [2024-11-28 06:31:16.684167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.915 [2024-11-28 06:31:16.684408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.915 [2024-11-28 06:31:16.684438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.848 06:31:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.848 06:31:17 -- common/autotest_common.sh@862 -- # return 0 00:06:06.848 06:31:17 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69803 00:06:06.848 06:31:17 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69803 /var/tmp/spdk2.sock 00:06:06.849 06:31:17 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.849 06:31:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.849 06:31:17 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69803 /var/tmp/spdk2.sock 00:06:06.849 06:31:17 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.849 06:31:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.849 06:31:17 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.849 06:31:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.849 06:31:17 -- common/autotest_common.sh@653 -- # waitforlisten 69803 /var/tmp/spdk2.sock 00:06:06.849 06:31:17 -- common/autotest_common.sh@829 -- # '[' -z 69803 ']' 00:06:06.849 06:31:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.849 06:31:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.849 06:31:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:06.849 06:31:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.849 06:31:17 -- common/autotest_common.sh@10 -- # set +x 00:06:06.849 [2024-11-28 06:31:17.371218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.849 [2024-11-28 06:31:17.371663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:06:06.849 [2024-11-28 06:31:17.516660] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69791 has claimed it. 00:06:06.849 [2024-11-28 06:31:17.516724] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.415 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69803) - No such process 00:06:07.415 ERROR: process (pid: 69803) is no longer running 00:06:07.415 06:31:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.415 06:31:17 -- common/autotest_common.sh@862 -- # return 1 00:06:07.415 06:31:17 -- common/autotest_common.sh@653 -- # es=1 00:06:07.415 06:31:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.415 06:31:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.415 06:31:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.415 06:31:17 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.415 06:31:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.415 06:31:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.415 06:31:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.415 06:31:17 -- event/cpu_locks.sh@141 -- # killprocess 69791 00:06:07.415 06:31:17 -- common/autotest_common.sh@936 -- # '[' -z 69791 ']' 00:06:07.415 06:31:17 -- common/autotest_common.sh@940 -- # kill -0 69791 00:06:07.415 06:31:17 -- common/autotest_common.sh@941 -- # uname 00:06:07.415 06:31:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.415 06:31:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69791 00:06:07.415 06:31:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.415 06:31:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.415 killing process with pid 69791 00:06:07.415 06:31:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69791' 00:06:07.415 06:31:18 -- common/autotest_common.sh@955 -- # kill 69791 00:06:07.415 06:31:18 -- common/autotest_common.sh@960 -- # wait 69791 00:06:07.674 00:06:07.674 real 0m1.870s 00:06:07.674 user 0m5.058s 00:06:07.674 sys 0m0.404s 00:06:07.674 06:31:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.674 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.674 ************************************ 00:06:07.674 END TEST locking_overlapped_coremask 00:06:07.674 ************************************ 00:06:07.674 06:31:18 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.674 06:31:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.674 06:31:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.674 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.674 ************************************ 00:06:07.674 START TEST locking_overlapped_coremask_via_rpc 00:06:07.674 ************************************ 00:06:07.674 06:31:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:07.674 06:31:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69845 00:06:07.674 06:31:18 -- event/cpu_locks.sh@149 -- # waitforlisten 69845 /var/tmp/spdk.sock 00:06:07.674 06:31:18 -- common/autotest_common.sh@829 -- # '[' -z 69845 ']' 00:06:07.674 06:31:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.674 06:31:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.674 06:31:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.674 06:31:18 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.674 06:31:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.674 06:31:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.674 [2024-11-28 06:31:18.433445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:07.674 [2024-11-28 06:31:18.433565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69845 ] 00:06:07.933 [2024-11-28 06:31:18.566733] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.933 [2024-11-28 06:31:18.566794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.933 [2024-11-28 06:31:18.606870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.933 [2024-11-28 06:31:18.607388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.933 [2024-11-28 06:31:18.607670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.933 [2024-11-28 06:31:18.607764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.499 06:31:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.499 06:31:19 -- common/autotest_common.sh@862 -- # return 0 00:06:08.499 06:31:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69863 00:06:08.499 06:31:19 -- event/cpu_locks.sh@153 -- # waitforlisten 69863 /var/tmp/spdk2.sock 00:06:08.499 06:31:19 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.499 06:31:19 -- common/autotest_common.sh@829 -- # '[' -z 69863 ']' 00:06:08.499 06:31:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.499 06:31:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.499 06:31:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:08.499 06:31:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.499 06:31:19 -- common/autotest_common.sh@10 -- # set +x 00:06:08.757 [2024-11-28 06:31:19.316431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:08.757 [2024-11-28 06:31:19.316532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69863 ] 00:06:08.757 [2024-11-28 06:31:19.457643] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.757 [2024-11-28 06:31:19.457689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.757 [2024-11-28 06:31:19.523144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.757 [2024-11-28 06:31:19.523795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.015 [2024-11-28 06:31:19.526763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.015 [2024-11-28 06:31:19.526824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.604 06:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.604 06:31:20 -- common/autotest_common.sh@862 -- # return 0 00:06:09.604 06:31:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.604 06:31:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.604 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.604 06:31:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.604 06:31:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.604 06:31:20 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.604 06:31:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.604 06:31:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:09.604 06:31:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.604 06:31:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:09.604 06:31:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.604 06:31:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.604 06:31:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.604 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.604 [2024-11-28 06:31:20.163864] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69845 has claimed it. 00:06:09.604 request: 00:06:09.604 { 00:06:09.604 "method": "framework_enable_cpumask_locks", 00:06:09.604 "req_id": 1 00:06:09.604 } 00:06:09.604 Got JSON-RPC error response 00:06:09.604 response: 00:06:09.604 { 00:06:09.604 "code": -32603, 00:06:09.604 "message": "Failed to claim CPU core: 2" 00:06:09.604 } 00:06:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.604 06:31:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:09.604 06:31:20 -- common/autotest_common.sh@653 -- # es=1 00:06:09.604 06:31:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.604 06:31:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.604 06:31:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.604 06:31:20 -- event/cpu_locks.sh@158 -- # waitforlisten 69845 /var/tmp/spdk.sock 00:06:09.604 06:31:20 -- common/autotest_common.sh@829 -- # '[' -z 69845 ']' 00:06:09.604 06:31:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.604 06:31:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.605 06:31:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.605 06:31:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.605 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.861 06:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.861 06:31:20 -- common/autotest_common.sh@862 -- # return 0 00:06:09.861 06:31:20 -- event/cpu_locks.sh@159 -- # waitforlisten 69863 /var/tmp/spdk2.sock 00:06:09.861 06:31:20 -- common/autotest_common.sh@829 -- # '[' -z 69863 ']' 00:06:09.861 06:31:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.861 06:31:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.861 06:31:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.861 06:31:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.861 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.861 ************************************ 00:06:09.861 END TEST locking_overlapped_coremask_via_rpc 00:06:09.861 ************************************ 00:06:09.861 06:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.861 06:31:20 -- common/autotest_common.sh@862 -- # return 0 00:06:09.861 06:31:20 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.861 06:31:20 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.861 06:31:20 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.862 06:31:20 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.862 00:06:09.862 real 0m2.194s 00:06:09.862 user 0m1.015s 00:06:09.862 sys 0m0.108s 00:06:09.862 06:31:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.862 06:31:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.862 06:31:20 -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.862 06:31:20 -- event/cpu_locks.sh@15 -- # [[ -z 69845 ]] 00:06:09.862 06:31:20 -- event/cpu_locks.sh@15 -- # killprocess 69845 00:06:09.862 06:31:20 -- common/autotest_common.sh@936 -- # '[' -z 69845 ']' 00:06:09.862 06:31:20 -- common/autotest_common.sh@940 -- # kill -0 69845 00:06:09.862 06:31:20 -- common/autotest_common.sh@941 -- # uname 00:06:09.862 06:31:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.862 06:31:20 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 69845 00:06:09.862 killing process with pid 69845 00:06:09.862 06:31:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.862 06:31:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.862 06:31:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69845' 00:06:09.862 06:31:20 -- common/autotest_common.sh@955 -- # kill 69845 00:06:09.862 06:31:20 -- common/autotest_common.sh@960 -- # wait 69845 00:06:10.426 06:31:20 -- event/cpu_locks.sh@16 -- # [[ -z 69863 ]] 00:06:10.426 06:31:20 -- event/cpu_locks.sh@16 -- # killprocess 69863 00:06:10.426 06:31:20 -- common/autotest_common.sh@936 -- # '[' -z 69863 ']' 00:06:10.426 06:31:20 -- common/autotest_common.sh@940 -- # kill -0 69863 00:06:10.426 06:31:20 -- common/autotest_common.sh@941 -- # uname 00:06:10.426 06:31:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.426 06:31:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69863 00:06:10.426 killing process with pid 69863 00:06:10.426 06:31:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:10.426 06:31:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:10.426 06:31:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69863' 00:06:10.426 06:31:20 -- common/autotest_common.sh@955 -- # kill 69863 00:06:10.426 06:31:20 -- common/autotest_common.sh@960 -- # wait 69863 00:06:10.426 06:31:21 -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.426 06:31:21 -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.426 06:31:21 -- event/cpu_locks.sh@15 -- # [[ -z 69845 ]] 00:06:10.426 06:31:21 -- event/cpu_locks.sh@15 -- # killprocess 69845 00:06:10.426 06:31:21 -- common/autotest_common.sh@936 -- # '[' -z 69845 ']' 00:06:10.426 06:31:21 -- common/autotest_common.sh@940 -- # kill -0 69845 00:06:10.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69845) - No such process 00:06:10.426 Process with pid 69845 is not found 00:06:10.426 06:31:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69845 is not found' 00:06:10.426 06:31:21 -- event/cpu_locks.sh@16 -- # [[ -z 69863 ]] 00:06:10.426 06:31:21 -- event/cpu_locks.sh@16 -- # killprocess 69863 00:06:10.426 06:31:21 -- common/autotest_common.sh@936 -- # '[' -z 69863 ']' 00:06:10.426 06:31:21 -- common/autotest_common.sh@940 -- # kill -0 69863 00:06:10.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69863) - No such process 00:06:10.426 Process with pid 69863 is not found 00:06:10.427 06:31:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69863 is not found' 00:06:10.427 06:31:21 -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.427 00:06:10.427 real 0m15.815s 00:06:10.427 user 0m27.589s 00:06:10.427 sys 0m4.068s 00:06:10.427 06:31:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.427 ************************************ 00:06:10.427 END TEST cpu_locks 00:06:10.427 ************************************ 00:06:10.427 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.684 00:06:10.684 real 0m39.095s 00:06:10.684 user 1m15.247s 00:06:10.684 sys 0m6.932s 00:06:10.684 06:31:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.684 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.684 ************************************ 00:06:10.684 END TEST event 00:06:10.684 ************************************ 00:06:10.684 06:31:21 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:10.684 06:31:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.684 06:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.684 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.684 ************************************ 00:06:10.684 START TEST thread 00:06:10.684 ************************************ 00:06:10.684 06:31:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:10.684 * Looking for test storage... 00:06:10.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:10.684 06:31:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:10.684 06:31:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:10.684 06:31:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:10.684 06:31:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:10.684 06:31:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:10.684 06:31:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:10.684 06:31:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:10.684 06:31:21 -- scripts/common.sh@335 -- # IFS=.-: 00:06:10.684 06:31:21 -- scripts/common.sh@335 -- # read -ra ver1 00:06:10.684 06:31:21 -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.684 06:31:21 -- scripts/common.sh@336 -- # read -ra ver2 00:06:10.684 06:31:21 -- scripts/common.sh@337 -- # local 'op=<' 00:06:10.684 06:31:21 -- scripts/common.sh@339 -- # ver1_l=2 00:06:10.684 06:31:21 -- scripts/common.sh@340 -- # ver2_l=1 00:06:10.684 06:31:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:10.684 06:31:21 -- scripts/common.sh@343 -- # case "$op" in 00:06:10.684 06:31:21 -- scripts/common.sh@344 -- # : 1 00:06:10.684 06:31:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:10.684 06:31:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.684 06:31:21 -- scripts/common.sh@364 -- # decimal 1 00:06:10.684 06:31:21 -- scripts/common.sh@352 -- # local d=1 00:06:10.684 06:31:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.684 06:31:21 -- scripts/common.sh@354 -- # echo 1 00:06:10.684 06:31:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:10.684 06:31:21 -- scripts/common.sh@365 -- # decimal 2 00:06:10.684 06:31:21 -- scripts/common.sh@352 -- # local d=2 00:06:10.684 06:31:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.684 06:31:21 -- scripts/common.sh@354 -- # echo 2 00:06:10.684 06:31:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:10.684 06:31:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:10.684 06:31:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:10.685 06:31:21 -- scripts/common.sh@367 -- # return 0 00:06:10.685 06:31:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.685 06:31:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.685 --rc genhtml_branch_coverage=1 00:06:10.685 --rc genhtml_function_coverage=1 00:06:10.685 --rc genhtml_legend=1 00:06:10.685 --rc geninfo_all_blocks=1 00:06:10.685 --rc geninfo_unexecuted_blocks=1 00:06:10.685 00:06:10.685 ' 00:06:10.685 06:31:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.685 --rc genhtml_branch_coverage=1 00:06:10.685 --rc genhtml_function_coverage=1 00:06:10.685 --rc genhtml_legend=1 00:06:10.685 --rc geninfo_all_blocks=1 00:06:10.685 --rc geninfo_unexecuted_blocks=1 00:06:10.685 00:06:10.685 ' 00:06:10.685 06:31:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.685 --rc genhtml_branch_coverage=1 00:06:10.685 --rc genhtml_function_coverage=1 00:06:10.685 --rc genhtml_legend=1 00:06:10.685 --rc geninfo_all_blocks=1 00:06:10.685 --rc geninfo_unexecuted_blocks=1 00:06:10.685 00:06:10.685 ' 00:06:10.685 06:31:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.685 --rc genhtml_branch_coverage=1 00:06:10.685 --rc genhtml_function_coverage=1 00:06:10.685 --rc genhtml_legend=1 00:06:10.685 --rc geninfo_all_blocks=1 00:06:10.685 --rc geninfo_unexecuted_blocks=1 00:06:10.685 00:06:10.685 ' 00:06:10.685 06:31:21 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.685 06:31:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:10.685 06:31:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.685 06:31:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.685 ************************************ 00:06:10.685 START TEST thread_poller_perf 00:06:10.685 ************************************ 00:06:10.685 06:31:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.685 [2024-11-28 06:31:21.409174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:10.685 [2024-11-28 06:31:21.409260] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69989 ] 00:06:10.942 [2024-11-28 06:31:21.539269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.942 [2024-11-28 06:31:21.577213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.942 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:11.898 [2024-11-28T06:31:22.668Z] ====================================== 00:06:11.898 [2024-11-28T06:31:22.668Z] busy:2613728048 (cyc) 00:06:11.898 [2024-11-28T06:31:22.668Z] total_run_count: 394000 00:06:11.898 [2024-11-28T06:31:22.668Z] tsc_hz: 2600000000 (cyc) 00:06:11.898 [2024-11-28T06:31:22.668Z] ====================================== 00:06:11.898 [2024-11-28T06:31:22.668Z] poller_cost: 6633 (cyc), 2551 (nsec) 00:06:11.898 00:06:11.898 real 0m1.267s 00:06:11.898 user 0m1.102s 00:06:11.898 sys 0m0.059s 00:06:11.898 06:31:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.898 ************************************ 00:06:11.898 END TEST thread_poller_perf 00:06:11.898 ************************************ 00:06:11.898 06:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:12.157 06:31:22 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.157 06:31:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:12.157 06:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.157 06:31:22 -- common/autotest_common.sh@10 -- # set +x 00:06:12.157 ************************************ 00:06:12.157 START TEST thread_poller_perf 00:06:12.157 ************************************ 00:06:12.157 06:31:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.157 [2024-11-28 06:31:22.718424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:12.157 [2024-11-28 06:31:22.718628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:06:12.157 [2024-11-28 06:31:22.851360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.157 [2024-11-28 06:31:22.891053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.157 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:13.541 [2024-11-28T06:31:24.311Z] ====================================== 00:06:13.541 [2024-11-28T06:31:24.311Z] busy:2603532948 (cyc) 00:06:13.541 [2024-11-28T06:31:24.311Z] total_run_count: 5340000 00:06:13.541 [2024-11-28T06:31:24.311Z] tsc_hz: 2600000000 (cyc) 00:06:13.541 [2024-11-28T06:31:24.311Z] ====================================== 00:06:13.541 [2024-11-28T06:31:24.311Z] poller_cost: 487 (cyc), 187 (nsec) 00:06:13.541 00:06:13.541 real 0m1.268s 00:06:13.541 user 0m1.112s 00:06:13.541 sys 0m0.050s 00:06:13.541 06:31:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.541 ************************************ 00:06:13.541 END TEST thread_poller_perf 00:06:13.541 06:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:13.541 ************************************ 00:06:13.541 06:31:23 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.541 00:06:13.541 real 0m2.761s 00:06:13.541 user 0m2.332s 00:06:13.541 sys 0m0.221s 00:06:13.541 06:31:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.541 06:31:23 -- common/autotest_common.sh@10 -- # set +x 00:06:13.541 ************************************ 00:06:13.541 END TEST thread 00:06:13.541 ************************************ 00:06:13.541 06:31:24 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:13.541 06:31:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.541 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:13.541 ************************************ 00:06:13.541 START TEST accel 00:06:13.541 ************************************ 00:06:13.541 06:31:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:13.541 * Looking for test storage... 00:06:13.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:13.541 06:31:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.541 06:31:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.541 06:31:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.541 06:31:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.541 06:31:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.541 06:31:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.541 06:31:24 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.541 06:31:24 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.541 06:31:24 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.541 06:31:24 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.541 06:31:24 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.541 06:31:24 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.541 06:31:24 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.541 06:31:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.541 06:31:24 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.541 06:31:24 -- scripts/common.sh@344 -- # : 1 00:06:13.541 06:31:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.541 06:31:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.541 06:31:24 -- scripts/common.sh@364 -- # decimal 1 00:06:13.541 06:31:24 -- scripts/common.sh@352 -- # local d=1 00:06:13.541 06:31:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.541 06:31:24 -- scripts/common.sh@354 -- # echo 1 00:06:13.541 06:31:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.541 06:31:24 -- scripts/common.sh@365 -- # decimal 2 00:06:13.541 06:31:24 -- scripts/common.sh@352 -- # local d=2 00:06:13.541 06:31:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.541 06:31:24 -- scripts/common.sh@354 -- # echo 2 00:06:13.541 06:31:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.541 06:31:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.541 06:31:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.541 06:31:24 -- scripts/common.sh@367 -- # return 0 00:06:13.541 06:31:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.541 --rc genhtml_branch_coverage=1 00:06:13.541 --rc genhtml_function_coverage=1 00:06:13.541 --rc genhtml_legend=1 00:06:13.541 --rc geninfo_all_blocks=1 00:06:13.541 --rc geninfo_unexecuted_blocks=1 00:06:13.541 00:06:13.541 ' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.541 --rc genhtml_branch_coverage=1 00:06:13.541 --rc genhtml_function_coverage=1 00:06:13.541 --rc genhtml_legend=1 00:06:13.541 --rc geninfo_all_blocks=1 00:06:13.541 --rc geninfo_unexecuted_blocks=1 00:06:13.541 00:06:13.541 ' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.541 --rc genhtml_branch_coverage=1 00:06:13.541 --rc genhtml_function_coverage=1 00:06:13.541 --rc genhtml_legend=1 00:06:13.541 --rc geninfo_all_blocks=1 00:06:13.541 --rc geninfo_unexecuted_blocks=1 00:06:13.541 00:06:13.541 ' 00:06:13.541 06:31:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.541 --rc genhtml_branch_coverage=1 00:06:13.541 --rc genhtml_function_coverage=1 00:06:13.541 --rc genhtml_legend=1 00:06:13.541 --rc geninfo_all_blocks=1 00:06:13.541 --rc geninfo_unexecuted_blocks=1 00:06:13.541 00:06:13.541 ' 00:06:13.541 06:31:24 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:13.541 06:31:24 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:13.541 06:31:24 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.541 06:31:24 -- accel/accel.sh@59 -- # spdk_tgt_pid=70108 00:06:13.541 06:31:24 -- accel/accel.sh@60 -- # waitforlisten 70108 00:06:13.541 06:31:24 -- common/autotest_common.sh@829 -- # '[' -z 70108 ']' 00:06:13.541 06:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.541 06:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.541 06:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.541 06:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.541 06:31:24 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.541 06:31:24 -- common/autotest_common.sh@10 -- # set +x 00:06:13.541 06:31:24 -- accel/accel.sh@58 -- # build_accel_config 00:06:13.541 06:31:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.541 06:31:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.541 06:31:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.541 06:31:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.541 06:31:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.541 06:31:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.541 06:31:24 -- accel/accel.sh@42 -- # jq -r . 00:06:13.541 [2024-11-28 06:31:24.227130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:13.541 [2024-11-28 06:31:24.227244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70108 ] 00:06:13.799 [2024-11-28 06:31:24.359665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.799 [2024-11-28 06:31:24.399172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.799 [2024-11-28 06:31:24.399354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.367 06:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.367 06:31:25 -- common/autotest_common.sh@862 -- # return 0 00:06:14.367 06:31:25 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.367 06:31:25 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:14.367 06:31:25 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.367 06:31:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.367 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.367 06:31:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.367 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.367 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.367 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.368 
06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.368 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.368 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.368 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.368 06:31:25 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 06:31:25 -- accel/accel.sh@64 -- # IFS== 00:06:14.368 06:31:25 -- accel/accel.sh@64 -- # read -r opc module 00:06:14.368 06:31:25 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:14.368 06:31:25 -- accel/accel.sh@67 -- # killprocess 70108 00:06:14.368 06:31:25 -- common/autotest_common.sh@936 -- # '[' -z 70108 ']' 00:06:14.368 06:31:25 -- common/autotest_common.sh@940 -- # kill -0 70108 00:06:14.368 06:31:25 -- common/autotest_common.sh@941 -- # uname 00:06:14.368 06:31:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.368 06:31:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70108 00:06:14.368 06:31:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.368 06:31:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.368 killing process with pid 70108 00:06:14.368 06:31:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70108' 00:06:14.368 06:31:25 -- common/autotest_common.sh@955 -- # kill 70108 00:06:14.368 06:31:25 -- common/autotest_common.sh@960 -- # wait 70108 00:06:14.934 06:31:25 -- accel/accel.sh@68 -- # trap - ERR 00:06:14.934 06:31:25 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:14.934 06:31:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:14.934 06:31:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.934 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.934 06:31:25 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:14.934 06:31:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.934 06:31:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.934 06:31:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:14.934 06:31:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.934 06:31:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.934 06:31:25 -- accel/accel.sh@42 -- # jq -r . 
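The IFS== read -r opc module loop traced above splits each opc=module pair returned by the accel_get_opc_assignments RPC; on this software-only build every opcode maps to the software module. Assuming a spdk_tgt still listening on the default /var/tmp/spdk.sock, the same table can be pulled by hand with the repo's rpc.py (path follows the repo layout seen in this run), using the exact jq filter from the trace:

    $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
          | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # expect lines such as copy=software, fill=software, crc32c=software, ...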
00:06:14.934 06:31:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.934 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.934 06:31:25 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:14.934 06:31:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.934 06:31:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.934 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:14.934 ************************************ 00:06:14.934 START TEST accel_missing_filename 00:06:14.934 ************************************ 00:06:14.934 06:31:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:14.934 06:31:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:14.934 06:31:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:14.934 06:31:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:14.934 06:31:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.934 06:31:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:14.934 06:31:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.934 06:31:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:14.934 06:31:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:14.934 06:31:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.934 06:31:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.934 06:31:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.934 06:31:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.934 06:31:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.934 06:31:25 -- accel/accel.sh@42 -- # jq -r . 00:06:14.934 [2024-11-28 06:31:25.548757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:14.934 [2024-11-28 06:31:25.549140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70156 ] 00:06:14.934 [2024-11-28 06:31:25.682512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.193 [2024-11-28 06:31:25.721882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.193 [2024-11-28 06:31:25.762060] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.193 [2024-11-28 06:31:25.814787] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:15.193 A filename is required. 
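accel_missing_filename passes precisely because the bare compress run aborts with the message above: -l, the name of the uncompressed input file, was deliberately left off. Roughly, using the bib file the suite's later compress tests point at (and noting from the compress_verify test below that -y must stay off for compress):

    $ accel_perf -t 1 -w compress                               # aborts: A filename is required.
    $ accel_perf -t 1 -w compress \
          -l /home/vagrant/spdk_repo/spdk/test/accel/bib        # should run against the bib input file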
00:06:15.193 06:31:25 -- common/autotest_common.sh@653 -- # es=234 00:06:15.193 06:31:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.193 06:31:25 -- common/autotest_common.sh@662 -- # es=106 00:06:15.193 06:31:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.193 06:31:25 -- common/autotest_common.sh@670 -- # es=1 00:06:15.193 06:31:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.193 00:06:15.193 real 0m0.374s 00:06:15.193 user 0m0.187s 00:06:15.193 sys 0m0.111s 00:06:15.193 06:31:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.193 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:15.193 ************************************ 00:06:15.193 END TEST accel_missing_filename 00:06:15.193 ************************************ 00:06:15.193 06:31:25 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.193 06:31:25 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:15.193 06:31:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.193 06:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:15.193 ************************************ 00:06:15.193 START TEST accel_compress_verify 00:06:15.193 ************************************ 00:06:15.194 06:31:25 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.194 06:31:25 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.194 06:31:25 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.194 06:31:25 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:15.194 06:31:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.194 06:31:25 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:15.194 06:31:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.194 06:31:25 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.194 06:31:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:15.194 06:31:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.194 06:31:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.194 06:31:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.194 06:31:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.194 06:31:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.194 06:31:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.194 06:31:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.194 06:31:25 -- accel/accel.sh@42 -- # jq -r . 00:06:15.194 [2024-11-28 06:31:25.958972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:15.194 [2024-11-28 06:31:25.959083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70182 ] 00:06:15.452 [2024-11-28 06:31:26.091428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.452 [2024-11-28 06:31:26.131358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.452 [2024-11-28 06:31:26.171735] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.711 [2024-11-28 06:31:26.224987] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:15.711 00:06:15.711 Compression does not support the verify option, aborting. 00:06:15.711 06:31:26 -- common/autotest_common.sh@653 -- # es=161 00:06:15.711 06:31:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.711 06:31:26 -- common/autotest_common.sh@662 -- # es=33 00:06:15.711 06:31:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.711 06:31:26 -- common/autotest_common.sh@670 -- # es=1 00:06:15.711 06:31:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.711 00:06:15.711 real 0m0.370s 00:06:15.711 user 0m0.181s 00:06:15.711 sys 0m0.112s 00:06:15.711 06:31:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.711 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 ************************************ 00:06:15.711 END TEST accel_compress_verify 00:06:15.711 ************************************ 00:06:15.711 06:31:26 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.711 06:31:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:15.711 06:31:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.711 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 ************************************ 00:06:15.711 START TEST accel_wrong_workload 00:06:15.711 ************************************ 00:06:15.711 06:31:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:15.711 06:31:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.711 06:31:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.711 06:31:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.711 06:31:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:15.711 06:31:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.711 06:31:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.711 06:31:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.711 06:31:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.711 06:31:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.711 06:31:26 -- accel/accel.sh@42 -- # jq -r . 
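Both aborted runs above funnel through the same normalization in the NOT wrapper: a raw status above 128 (234 for the missing filename, 161 for compress with -y) has the shell's killed-by-signal offset stripped, giving 106 and 33, the case arm collapses any surviving failure to es=1, and (( !es == 0 )) then succeeds exactly when the wrapped command failed. A hedged sketch of that flow, not the exact autotest_common.sh body:

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=$((es - 128))   # strip the signal offset: 234 -> 106, 161 -> 33
        ((es != 0)) && es=1                # collapse any surviving failure to 1
        ((!es == 0))                       # succeed only when the command failed
    }
    NOT accel_perf -t 1 -w compress && echo 'failed, as the test expects'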
00:06:15.711 Unsupported workload type: foobar 00:06:15.711 [2024-11-28 06:31:26.367744] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.711 accel_perf options: 00:06:15.711 [-h help message] 00:06:15.711 [-q queue depth per core] 00:06:15.711 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.711 [-T number of threads per core 00:06:15.711 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.711 [-t time in seconds] 00:06:15.711 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.711 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.711 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.711 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.711 [-S for crc32c workload, use this seed value (default 0) 00:06:15.711 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.711 [-f for fill workload, use this BYTE value (default 255) 00:06:15.711 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.711 [-y verify result if this switch is on] 00:06:15.711 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.711 Can be used to spread operations across a wider range of memory. 00:06:15.711 06:31:26 -- common/autotest_common.sh@653 -- # es=1 00:06:15.711 06:31:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.711 06:31:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.711 06:31:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.711 00:06:15.711 real 0m0.043s 00:06:15.711 user 0m0.042s 00:06:15.711 sys 0m0.028s 00:06:15.711 06:31:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.711 ************************************ 00:06:15.711 END TEST accel_wrong_workload 00:06:15.711 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 ************************************ 00:06:15.711 06:31:26 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.711 06:31:26 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:15.711 06:31:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.711 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.711 ************************************ 00:06:15.711 START TEST accel_negative_buffers 00:06:15.711 ************************************ 00:06:15.711 06:31:26 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.711 06:31:26 -- common/autotest_common.sh@650 -- # local es=0 00:06:15.711 06:31:26 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:15.711 06:31:26 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:15.711 06:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.711 06:31:26 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:15.711 06:31:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:15.711 06:31:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:15.711 06:31:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.711 06:31:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.711 06:31:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.711 06:31:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.711 06:31:26 -- accel/accel.sh@42 -- # jq -r . 00:06:15.711 -x option must be non-negative. 00:06:15.711 [2024-11-28 06:31:26.448974] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:15.711 accel_perf options: 00:06:15.711 [-h help message] 00:06:15.711 [-q queue depth per core] 00:06:15.711 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.711 [-T number of threads per core 00:06:15.711 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.711 [-t time in seconds] 00:06:15.711 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.711 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.711 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.711 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.711 [-S for crc32c workload, use this seed value (default 0) 00:06:15.711 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.711 [-f for fill workload, use this BYTE value (default 255) 00:06:15.712 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.712 [-y verify result if this switch is on] 00:06:15.712 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.712 Can be used to spread operations across a wider range of memory. 
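The usage dump above is accel_perf rejecting -x -1 before any I/O is queued; per that text, -x sets the number of xor source buffers with a minimum of 2, so the boundary looks roughly like:

    $ accel_perf -t 1 -w xor -y -x -1   # rejected: -x option must be non-negative.
    $ accel_perf -t 1 -w xor -y -x 2    # smallest valid xor: two source buffers
    $ accel_perf -t 1 -w xor -y -x 3    # xor across three source buffers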
00:06:15.712 06:31:26 -- common/autotest_common.sh@653 -- # es=1 00:06:15.712 06:31:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.712 06:31:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.712 06:31:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.712 00:06:15.712 real 0m0.046s 00:06:15.712 user 0m0.049s 00:06:15.712 sys 0m0.024s 00:06:15.712 06:31:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.712 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.712 ************************************ 00:06:15.712 END TEST accel_negative_buffers 00:06:15.712 ************************************ 00:06:15.970 06:31:26 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:15.970 06:31:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:15.970 06:31:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.970 06:31:26 -- common/autotest_common.sh@10 -- # set +x 00:06:15.970 ************************************ 00:06:15.970 START TEST accel_crc32c 00:06:15.970 ************************************ 00:06:15.970 06:31:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:15.970 06:31:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.970 06:31:26 -- accel/accel.sh@17 -- # local accel_module 00:06:15.970 06:31:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.970 06:31:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:15.970 06:31:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.970 06:31:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.970 06:31:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.970 06:31:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.970 06:31:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.970 06:31:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.970 06:31:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.970 06:31:26 -- accel/accel.sh@42 -- # jq -r . 00:06:15.970 [2024-11-28 06:31:26.534004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:15.970 [2024-11-28 06:31:26.534141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70243 ] 00:06:15.970 [2024-11-28 06:31:26.667445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.970 [2024-11-28 06:31:26.709236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.346 06:31:27 -- accel/accel.sh@18 -- # out=' 00:06:17.346 SPDK Configuration: 00:06:17.346 Core mask: 0x1 00:06:17.346 00:06:17.346 Accel Perf Configuration: 00:06:17.346 Workload Type: crc32c 00:06:17.346 CRC-32C seed: 32 00:06:17.346 Transfer size: 4096 bytes 00:06:17.346 Vector count 1 00:06:17.346 Module: software 00:06:17.346 Queue depth: 32 00:06:17.346 Allocate depth: 32 00:06:17.346 # threads/core: 1 00:06:17.346 Run time: 1 seconds 00:06:17.346 Verify: Yes 00:06:17.346 00:06:17.346 Running for 1 seconds... 
00:06:17.346 00:06:17.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.346 ------------------------------------------------------------------------------------ 00:06:17.346 0,0 455360/s 1778 MiB/s 0 0 00:06:17.346 ==================================================================================== 00:06:17.346 Total 455360/s 1778 MiB/s 0 0' 00:06:17.346 06:31:27 -- accel/accel.sh@20 -- # IFS=: 00:06:17.346 06:31:27 -- accel/accel.sh@20 -- # read -r var val 00:06:17.346 06:31:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:17.346 06:31:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:17.346 06:31:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.346 06:31:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.346 06:31:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.346 06:31:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.346 06:31:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.346 06:31:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.346 06:31:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.346 06:31:27 -- accel/accel.sh@42 -- # jq -r . 00:06:17.346 [2024-11-28 06:31:27.920821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.346 [2024-11-28 06:31:27.920934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70264 ] 00:06:17.346 [2024-11-28 06:31:28.053932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.346 [2024-11-28 06:31:28.094886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val=0x1 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val=crc32c 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val=32 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val=software 00:06:17.605 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.605 06:31:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.605 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.605 06:31:28 -- accel/accel.sh@21 -- # val=32 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val=32 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val=1 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val=Yes 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:17.606 06:31:28 -- accel/accel.sh@21 -- # val= 00:06:17.606 06:31:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # IFS=: 00:06:17.606 06:31:28 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- 
accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@21 -- # val= 00:06:18.541 06:31:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # IFS=: 00:06:18.541 06:31:29 -- accel/accel.sh@20 -- # read -r var val 00:06:18.541 06:31:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.541 06:31:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:18.541 06:31:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.541 00:06:18.541 real 0m2.762s 00:06:18.541 user 0m2.338s 00:06:18.541 sys 0m0.222s 00:06:18.541 06:31:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.541 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:18.541 ************************************ 00:06:18.541 END TEST accel_crc32c 00:06:18.541 ************************************ 00:06:18.541 06:31:29 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:18.541 06:31:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.541 06:31:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.541 06:31:29 -- common/autotest_common.sh@10 -- # set +x 00:06:18.541 ************************************ 00:06:18.541 START TEST accel_crc32c_C2 00:06:18.541 ************************************ 00:06:18.541 06:31:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:18.541 06:31:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.541 06:31:29 -- accel/accel.sh@17 -- # local accel_module 00:06:18.541 06:31:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.541 06:31:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.541 06:31:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.541 06:31:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.541 06:31:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.541 06:31:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.541 06:31:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.541 06:31:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.541 06:31:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.541 06:31:29 -- accel/accel.sh@42 -- # jq -r . 00:06:18.799 [2024-11-28 06:31:29.330374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:18.799 [2024-11-28 06:31:29.330881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:06:18.799 [2024-11-28 06:31:29.457573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.799 [2024-11-28 06:31:29.496340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.211 06:31:30 -- accel/accel.sh@18 -- # out=' 00:06:20.211 SPDK Configuration: 00:06:20.211 Core mask: 0x1 00:06:20.211 00:06:20.211 Accel Perf Configuration: 00:06:20.211 Workload Type: crc32c 00:06:20.211 CRC-32C seed: 0 00:06:20.211 Transfer size: 4096 bytes 00:06:20.211 Vector count 2 00:06:20.211 Module: software 00:06:20.211 Queue depth: 32 00:06:20.211 Allocate depth: 32 00:06:20.211 # threads/core: 1 00:06:20.211 Run time: 1 seconds 00:06:20.211 Verify: Yes 00:06:20.211 00:06:20.211 Running for 1 seconds... 
00:06:20.211 00:06:20.211 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.211 ------------------------------------------------------------------------------------ 00:06:20.211 0,0 497952/s 3890 MiB/s 0 0 00:06:20.211 ==================================================================================== 00:06:20.211 Total 497952/s 1945 MiB/s 0 0' 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:20.211 06:31:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:20.211 06:31:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.211 06:31:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.211 06:31:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.211 06:31:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.211 06:31:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.211 06:31:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.211 06:31:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.211 06:31:30 -- accel/accel.sh@42 -- # jq -r . 00:06:20.211 [2024-11-28 06:31:30.694344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:20.211 [2024-11-28 06:31:30.694452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70314 ] 00:06:20.211 [2024-11-28 06:31:30.821085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.211 [2024-11-28 06:31:30.860193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=0x1 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=crc32c 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=0 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=software 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=32 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=32 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=1 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val=Yes 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:20.211 06:31:30 -- accel/accel.sh@21 -- # val= 00:06:20.211 06:31:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # IFS=: 00:06:20.211 06:31:30 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- 
accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@21 -- # val= 00:06:21.598 06:31:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # IFS=: 00:06:21.598 06:31:32 -- accel/accel.sh@20 -- # read -r var val 00:06:21.598 06:31:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.598 06:31:32 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:21.598 06:31:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.598 00:06:21.598 real 0m2.725s 00:06:21.598 user 0m2.302s 00:06:21.598 sys 0m0.221s 00:06:21.598 06:31:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.598 ************************************ 00:06:21.598 END TEST accel_crc32c_C2 00:06:21.598 ************************************ 00:06:21.598 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:21.598 06:31:32 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.598 06:31:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.598 06:31:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.598 06:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:21.598 ************************************ 00:06:21.598 START TEST accel_copy 00:06:21.598 ************************************ 00:06:21.598 06:31:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:21.598 06:31:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.598 06:31:32 -- accel/accel.sh@17 -- # local accel_module 00:06:21.598 06:31:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:21.598 06:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.598 06:31:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.598 06:31:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.598 06:31:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.598 06:31:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.598 06:31:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.598 06:31:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.598 06:31:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.598 06:31:32 -- accel/accel.sh@42 -- # jq -r . 00:06:21.598 [2024-11-28 06:31:32.097144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.598 [2024-11-28 06:31:32.097260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70350 ] 00:06:21.598 [2024-11-28 06:31:32.231157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.598 [2024-11-28 06:31:32.269952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.973 06:31:33 -- accel/accel.sh@18 -- # out=' 00:06:22.973 SPDK Configuration: 00:06:22.973 Core mask: 0x1 00:06:22.973 00:06:22.973 Accel Perf Configuration: 00:06:22.973 Workload Type: copy 00:06:22.973 Transfer size: 4096 bytes 00:06:22.973 Vector count 1 00:06:22.973 Module: software 00:06:22.973 Queue depth: 32 00:06:22.973 Allocate depth: 32 00:06:22.973 # threads/core: 1 00:06:22.973 Run time: 1 seconds 00:06:22.973 Verify: Yes 00:06:22.973 00:06:22.973 Running for 1 seconds... 
00:06:22.973 00:06:22.973 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.973 ------------------------------------------------------------------------------------ 00:06:22.973 0,0 383744/s 1499 MiB/s 0 0 00:06:22.973 ==================================================================================== 00:06:22.973 Total 383744/s 1499 MiB/s 0 0' 00:06:22.973 06:31:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.973 06:31:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:22.973 06:31:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.973 06:31:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.973 06:31:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.973 06:31:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.973 06:31:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.973 06:31:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.973 06:31:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.973 06:31:33 -- accel/accel.sh@42 -- # jq -r . 00:06:22.973 [2024-11-28 06:31:33.451747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:22.973 [2024-11-28 06:31:33.451863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70370 ] 00:06:22.973 [2024-11-28 06:31:33.583373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.973 [2024-11-28 06:31:33.622341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.973 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.973 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.973 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.973 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.973 06:31:33 -- accel/accel.sh@21 -- # val=0x1 00:06:22.973 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.973 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.973 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.973 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.973 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.973 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=copy 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- 
accel/accel.sh@21 -- # val= 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=software 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=32 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=32 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=1 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val=Yes 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:22.974 06:31:33 -- accel/accel.sh@21 -- # val= 00:06:22.974 06:31:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # IFS=: 00:06:22.974 06:31:33 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@21 -- # val= 00:06:24.348 06:31:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.348 06:31:34 -- accel/accel.sh@20 -- # IFS=: 00:06:24.348 06:31:34 -- 
accel/accel.sh@20 -- # read -r var val 00:06:24.348 06:31:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.348 06:31:34 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:24.348 06:31:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.348 00:06:24.348 real 0m2.719s 00:06:24.348 user 0m2.298s 00:06:24.348 sys 0m0.219s 00:06:24.348 06:31:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.348 06:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:24.348 ************************************ 00:06:24.348 END TEST accel_copy 00:06:24.348 ************************************ 00:06:24.348 06:31:34 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.348 06:31:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:24.348 06:31:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.348 06:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:24.348 ************************************ 00:06:24.348 START TEST accel_fill 00:06:24.348 ************************************ 00:06:24.349 06:31:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.349 06:31:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.349 06:31:34 -- accel/accel.sh@17 -- # local accel_module 00:06:24.349 06:31:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.349 06:31:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:24.349 06:31:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.349 06:31:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.349 06:31:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.349 06:31:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.349 06:31:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.349 06:31:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.349 06:31:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.349 06:31:34 -- accel/accel.sh@42 -- # jq -r . 00:06:24.349 [2024-11-28 06:31:34.855027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:24.349 [2024-11-28 06:31:34.855584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70406 ] 00:06:24.349 [2024-11-28 06:31:34.986750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.349 [2024-11-28 06:31:35.025559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.722 06:31:36 -- accel/accel.sh@18 -- # out=' 00:06:25.722 SPDK Configuration: 00:06:25.722 Core mask: 0x1 00:06:25.722 00:06:25.722 Accel Perf Configuration: 00:06:25.722 Workload Type: fill 00:06:25.722 Fill pattern: 0x80 00:06:25.722 Transfer size: 4096 bytes 00:06:25.722 Vector count 1 00:06:25.722 Module: software 00:06:25.722 Queue depth: 64 00:06:25.722 Allocate depth: 64 00:06:25.722 # threads/core: 1 00:06:25.722 Run time: 1 seconds 00:06:25.722 Verify: Yes 00:06:25.722 00:06:25.722 Running for 1 seconds... 
00:06:25.722 00:06:25.722 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.722 ------------------------------------------------------------------------------------ 00:06:25.722 0,0 616768/s 2409 MiB/s 0 0 00:06:25.722 ==================================================================================== 00:06:25.722 Total 616768/s 2409 MiB/s 0 0' 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.722 06:31:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.722 06:31:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.722 06:31:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.722 06:31:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.722 06:31:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.722 06:31:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.722 06:31:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.722 06:31:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.722 06:31:36 -- accel/accel.sh@42 -- # jq -r . 00:06:25.722 [2024-11-28 06:31:36.223368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:25.722 [2024-11-28 06:31:36.223959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70421 ] 00:06:25.722 [2024-11-28 06:31:36.355544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.722 [2024-11-28 06:31:36.394653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=0x1 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=fill 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=0x80 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 
00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=software 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=64 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.722 06:31:36 -- accel/accel.sh@21 -- # val=64 00:06:25.722 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.722 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.723 06:31:36 -- accel/accel.sh@21 -- # val=1 00:06:25.723 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.723 06:31:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.723 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.723 06:31:36 -- accel/accel.sh@21 -- # val=Yes 00:06:25.723 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.723 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.723 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:25.723 06:31:36 -- accel/accel.sh@21 -- # val= 00:06:25.723 06:31:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # IFS=: 00:06:25.723 06:31:36 -- accel/accel.sh@20 -- # read -r var val 00:06:27.099 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.099 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.099 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.099 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.099 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.099 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:27.099 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.100 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.100 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.100 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.100 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.100 06:31:37 -- accel/accel.sh@21 -- # val= 00:06:27.100 06:31:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # IFS=: 00:06:27.100 06:31:37 -- accel/accel.sh@20 -- # read -r var val 00:06:27.100 06:31:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.100 06:31:37 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:27.100 06:31:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.100 00:06:27.100 real 0m2.736s 00:06:27.100 user 0m2.324s 00:06:27.100 sys 0m0.209s 00:06:27.100 06:31:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.100 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.100 ************************************ 00:06:27.100 END TEST accel_fill 00:06:27.100 ************************************ 00:06:27.100 06:31:37 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:27.100 06:31:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.100 06:31:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.100 06:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.100 ************************************ 00:06:27.100 START TEST accel_copy_crc32c 00:06:27.100 ************************************ 00:06:27.100 06:31:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:27.100 06:31:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.100 06:31:37 -- accel/accel.sh@17 -- # local accel_module 00:06:27.100 06:31:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.100 06:31:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.100 06:31:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.100 06:31:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.100 06:31:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.100 06:31:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.100 06:31:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.100 06:31:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.100 06:31:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.100 06:31:37 -- accel/accel.sh@42 -- # jq -r . 00:06:27.100 [2024-11-28 06:31:37.629847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.100 [2024-11-28 06:31:37.629950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70456 ] 00:06:27.100 [2024-11-28 06:31:37.764910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.100 [2024-11-28 06:31:37.806475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.477 06:31:38 -- accel/accel.sh@18 -- # out=' 00:06:28.477 SPDK Configuration: 00:06:28.477 Core mask: 0x1 00:06:28.477 00:06:28.477 Accel Perf Configuration: 00:06:28.477 Workload Type: copy_crc32c 00:06:28.477 CRC-32C seed: 0 00:06:28.477 Vector size: 4096 bytes 00:06:28.477 Transfer size: 4096 bytes 00:06:28.477 Vector count 1 00:06:28.477 Module: software 00:06:28.477 Queue depth: 32 00:06:28.477 Allocate depth: 32 00:06:28.477 # threads/core: 1 00:06:28.477 Run time: 1 seconds 00:06:28.477 Verify: Yes 00:06:28.477 00:06:28.477 Running for 1 seconds... 
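The copy_crc32c workload copies each 4096-byte vector and computes CRC-32C (seed 0) over it in the same operation. The bandwidth figure in the table below follows directly from the transfer rate; a quick shell-arithmetic check (1 MiB = 1048576 bytes):

    # 238336 transfers/s at 4096 bytes each:
    echo "$(( 238336 * 4096 / 1048576 )) MiB/s"   # -> 931 MiB/s, matching the table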
00:06:28.477 00:06:28.477 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.477 ------------------------------------------------------------------------------------ 00:06:28.477 0,0 238336/s 931 MiB/s 0 0 00:06:28.477 ==================================================================================== 00:06:28.477 Total 238336/s 931 MiB/s 0 0' 00:06:28.477 06:31:38 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:38 -- accel/accel.sh@20 -- # read -r var val 00:06:28.477 06:31:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.477 06:31:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.477 06:31:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.477 06:31:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.477 06:31:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.477 06:31:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.477 06:31:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.477 06:31:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.477 06:31:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.477 06:31:38 -- accel/accel.sh@42 -- # jq -r . 00:06:28.477 [2024-11-28 06:31:39.005186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.477 [2024-11-28 06:31:39.005299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70477 ] 00:06:28.477 [2024-11-28 06:31:39.136092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.477 [2024-11-28 06:31:39.174720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.477 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.477 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.477 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.477 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.477 06:31:39 -- accel/accel.sh@21 -- # val=0x1 00:06:28.477 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.477 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.477 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.477 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.477 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.477 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=0 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 
06:31:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=software 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=32 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=32 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=1 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val=Yes 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:28.478 06:31:39 -- accel/accel.sh@21 -- # val= 00:06:28.478 06:31:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # IFS=: 00:06:28.478 06:31:39 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 
00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@21 -- # val= 00:06:29.853 06:31:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # IFS=: 00:06:29.853 06:31:40 -- accel/accel.sh@20 -- # read -r var val 00:06:29.853 06:31:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.853 06:31:40 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:29.853 06:31:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.853 00:06:29.853 real 0m2.737s 00:06:29.853 user 0m2.332s 00:06:29.853 sys 0m0.204s 00:06:29.853 06:31:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.853 ************************************ 00:06:29.853 END TEST accel_copy_crc32c 00:06:29.853 ************************************ 00:06:29.853 06:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.853 06:31:40 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.853 06:31:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:29.853 06:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.853 06:31:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.853 ************************************ 00:06:29.853 START TEST accel_copy_crc32c_C2 00:06:29.853 ************************************ 00:06:29.853 06:31:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.853 06:31:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.853 06:31:40 -- accel/accel.sh@17 -- # local accel_module 00:06:29.853 06:31:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.853 06:31:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.853 06:31:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.853 06:31:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.853 06:31:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.853 06:31:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.853 06:31:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.853 06:31:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.853 06:31:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.853 06:31:40 -- accel/accel.sh@42 -- # jq -r . 00:06:29.853 [2024-11-28 06:31:40.410471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:29.853 [2024-11-28 06:31:40.410770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70507 ] 00:06:29.853 [2024-11-28 06:31:40.553558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.853 [2024-11-28 06:31:40.593686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.236 06:31:41 -- accel/accel.sh@18 -- # out=' 00:06:31.236 SPDK Configuration: 00:06:31.236 Core mask: 0x1 00:06:31.236 00:06:31.236 Accel Perf Configuration: 00:06:31.236 Workload Type: copy_crc32c 00:06:31.236 CRC-32C seed: 0 00:06:31.236 Vector size: 4096 bytes 00:06:31.236 Transfer size: 8192 bytes 00:06:31.236 Vector count 2 00:06:31.236 Module: software 00:06:31.236 Queue depth: 32 00:06:31.236 Allocate depth: 32 00:06:31.236 # threads/core: 1 00:06:31.236 Run time: 1 seconds 00:06:31.236 Verify: Yes 00:06:31.236 00:06:31.236 Running for 1 seconds... 00:06:31.236 00:06:31.236 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.236 ------------------------------------------------------------------------------------ 00:06:31.236 0,0 193568/s 1512 MiB/s 0 0 00:06:31.236 ==================================================================================== 00:06:31.236 Total 193568/s 756 MiB/s 0 0' 00:06:31.236 06:31:41 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:41 -- accel/accel.sh@20 -- # read -r var val 00:06:31.236 06:31:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:31.236 06:31:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:31.236 06:31:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.236 06:31:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.236 06:31:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.236 06:31:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.236 06:31:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.236 06:31:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.236 06:31:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.236 06:31:41 -- accel/accel.sh@42 -- # jq -r . 00:06:31.236 [2024-11-28 06:31:41.790721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
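One oddity in the table above: the per-core row and the Total row report the same 193568 transfers/s but different bandwidths (1512 vs 756 MiB/s). The arithmetic suggests the per-core row is derived from the 8192-byte transfer size and the Total row from the 4096-byte vector size, presumably a display quirk in this accel_perf build rather than a real throughput difference:

    # Same transfer count, two interpretations (1 MiB = 1048576 bytes):
    echo $(( 193568 * 8192 / 1048576 ))   # -> 1512 (transfer size; per-core row)
    echo $(( 193568 * 4096 / 1048576 ))   # -> 756  (vector size; Total row)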
00:06:31.236 [2024-11-28 06:31:41.790827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70527 ] 00:06:31.236 [2024-11-28 06:31:41.923320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.236 [2024-11-28 06:31:41.961275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.236 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.236 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.236 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.236 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.236 06:31:42 -- accel/accel.sh@21 -- # val=0x1 00:06:31.236 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.236 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.236 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.236 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.236 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.236 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=0 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=software 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=32 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=32 
00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=1 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val=Yes 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:31.496 06:31:42 -- accel/accel.sh@21 -- # val= 00:06:31.496 06:31:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # IFS=: 00:06:31.496 06:31:42 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@21 -- # val= 00:06:32.433 06:31:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # IFS=: 00:06:32.433 06:31:43 -- accel/accel.sh@20 -- # read -r var val 00:06:32.433 06:31:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.433 06:31:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:32.433 06:31:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.433 00:06:32.433 real 0m2.749s 00:06:32.433 user 0m2.332s 00:06:32.433 sys 0m0.211s 00:06:32.433 06:31:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.433 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.433 ************************************ 00:06:32.433 END TEST accel_copy_crc32c_C2 00:06:32.433 ************************************ 00:06:32.433 06:31:43 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:32.433 06:31:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:32.433 06:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.433 06:31:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.433 ************************************ 00:06:32.433 START TEST accel_dualcast 00:06:32.433 ************************************ 00:06:32.433 06:31:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:32.433 06:31:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.433 06:31:43 -- accel/accel.sh@17 -- # local accel_module 00:06:32.433 06:31:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:32.433 06:31:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:32.433 06:31:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.433 06:31:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.433 06:31:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.433 06:31:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.433 06:31:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.433 06:31:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.433 06:31:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.433 06:31:43 -- accel/accel.sh@42 -- # jq -r . 00:06:32.433 [2024-11-28 06:31:43.197120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:32.433 [2024-11-28 06:31:43.197338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70563 ] 00:06:32.693 [2024-11-28 06:31:43.331008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.693 [2024-11-28 06:31:43.369134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.069 06:31:44 -- accel/accel.sh@18 -- # out=' 00:06:34.069 SPDK Configuration: 00:06:34.069 Core mask: 0x1 00:06:34.069 00:06:34.069 Accel Perf Configuration: 00:06:34.069 Workload Type: dualcast 00:06:34.069 Transfer size: 4096 bytes 00:06:34.069 Vector count 1 00:06:34.069 Module: software 00:06:34.069 Queue depth: 32 00:06:34.069 Allocate depth: 32 00:06:34.069 # threads/core: 1 00:06:34.069 Run time: 1 seconds 00:06:34.069 Verify: Yes 00:06:34.069 00:06:34.069 Running for 1 seconds... 00:06:34.069 00:06:34.069 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.069 ------------------------------------------------------------------------------------ 00:06:34.069 0,0 459392/s 1794 MiB/s 0 0 00:06:34.069 ==================================================================================== 00:06:34.069 Total 459392/s 1794 MiB/s 0 0' 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:34.069 06:31:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:34.069 06:31:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.069 06:31:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.069 06:31:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.069 06:31:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.069 06:31:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.069 06:31:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.069 06:31:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.069 06:31:44 -- accel/accel.sh@42 -- # jq -r . 
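The dualcast workload copies one 4096-byte source buffer into two destination buffers per operation. The run above should be reproducible standalone roughly as follows (flags taken from the run_test line; the binary path is the one this trace prints):

    # Sketch: standalone dualcast run with the defaults shown in the banner
    # (4096-byte transfers, queue depth 32, software module).
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dualcast -y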
00:06:34.069 [2024-11-28 06:31:44.565422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:34.069 [2024-11-28 06:31:44.565520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70578 ] 00:06:34.069 [2024-11-28 06:31:44.698980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.069 [2024-11-28 06:31:44.737203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val=0x1 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val=dualcast 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val=software 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val=32 00:06:34.069 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.069 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.069 06:31:44 -- accel/accel.sh@21 -- # val=32 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.070 06:31:44 -- accel/accel.sh@21 -- # val=1 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 
06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.070 06:31:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.070 06:31:44 -- accel/accel.sh@21 -- # val=Yes 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.070 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:34.070 06:31:44 -- accel/accel.sh@21 -- # val= 00:06:34.070 06:31:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # IFS=: 00:06:34.070 06:31:44 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.441 06:31:45 -- accel/accel.sh@21 -- # val= 00:06:35.441 06:31:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.441 06:31:45 -- accel/accel.sh@20 -- # IFS=: 00:06:35.442 06:31:45 -- accel/accel.sh@20 -- # read -r var val 00:06:35.442 ************************************ 00:06:35.442 END TEST accel_dualcast 00:06:35.442 ************************************ 00:06:35.442 06:31:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.442 06:31:45 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:35.442 06:31:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.442 00:06:35.442 real 0m2.738s 00:06:35.442 user 0m2.324s 00:06:35.442 sys 0m0.215s 00:06:35.442 06:31:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.442 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:06:35.442 06:31:45 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:35.442 06:31:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:35.442 06:31:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.442 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:06:35.442 ************************************ 00:06:35.442 START TEST accel_compare 00:06:35.442 ************************************ 00:06:35.442 06:31:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:35.442 
06:31:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.442 06:31:45 -- accel/accel.sh@17 -- # local accel_module 00:06:35.442 06:31:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:35.442 06:31:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:35.442 06:31:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.442 06:31:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.442 06:31:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.442 06:31:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.442 06:31:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.442 06:31:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.442 06:31:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.442 06:31:45 -- accel/accel.sh@42 -- # jq -r . 00:06:35.442 [2024-11-28 06:31:45.970073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:35.442 [2024-11-28 06:31:45.970178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70613 ] 00:06:35.442 [2024-11-28 06:31:46.099349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.442 [2024-11-28 06:31:46.137923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.817 06:31:47 -- accel/accel.sh@18 -- # out=' 00:06:36.817 SPDK Configuration: 00:06:36.817 Core mask: 0x1 00:06:36.817 00:06:36.817 Accel Perf Configuration: 00:06:36.817 Workload Type: compare 00:06:36.817 Transfer size: 4096 bytes 00:06:36.817 Vector count 1 00:06:36.817 Module: software 00:06:36.817 Queue depth: 32 00:06:36.817 Allocate depth: 32 00:06:36.817 # threads/core: 1 00:06:36.817 Run time: 1 seconds 00:06:36.817 Verify: Yes 00:06:36.817 00:06:36.817 Running for 1 seconds... 00:06:36.817 00:06:36.817 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.817 ------------------------------------------------------------------------------------ 00:06:36.817 0,0 578048/s 2258 MiB/s 0 0 00:06:36.817 ==================================================================================== 00:06:36.817 Total 578048/s 2258 MiB/s 0 0' 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:36.817 06:31:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.817 06:31:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.817 06:31:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.817 06:31:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.817 06:31:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.817 06:31:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.817 06:31:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.817 06:31:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.817 06:31:47 -- accel/accel.sh@42 -- # jq -r . 00:06:36.817 [2024-11-28 06:31:47.335586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
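The starred START TEST / END TEST banners and the real/user/sys lines around each case come from the run_test helper in common/autotest_common.sh, which wraps each accel_test invocation. A simplified, hypothetical stand-in for what that wrapper does, not the actual implementation (the real helper also handles xtrace toggling):

    # Sketch of the banner wrapper visible throughout this log.
    run_test() {
        local name=$1 rc
        shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"
        rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }
    run_test demo_case true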
00:06:36.817 [2024-11-28 06:31:47.335812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70634 ] 00:06:36.817 [2024-11-28 06:31:47.462408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.817 [2024-11-28 06:31:47.500862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=0x1 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=compare 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=software 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=32 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=32 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=1 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val=Yes 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:36.817 06:31:47 -- accel/accel.sh@21 -- # val= 00:06:36.817 06:31:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # IFS=: 00:06:36.817 06:31:47 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@21 -- # val= 00:06:38.191 06:31:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # IFS=: 00:06:38.191 06:31:48 -- accel/accel.sh@20 -- # read -r var val 00:06:38.191 06:31:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.191 06:31:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:38.191 ************************************ 00:06:38.191 END TEST accel_compare 00:06:38.191 ************************************ 00:06:38.191 06:31:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.191 00:06:38.191 real 0m2.725s 00:06:38.191 user 0m2.314s 00:06:38.191 sys 0m0.209s 00:06:38.191 06:31:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.191 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 06:31:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:38.192 06:31:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.192 06:31:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.192 06:31:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.192 ************************************ 00:06:38.192 START TEST accel_xor 00:06:38.192 ************************************ 00:06:38.192 06:31:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:38.192 06:31:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.192 06:31:48 -- accel/accel.sh@17 -- # local accel_module 00:06:38.192 
06:31:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:38.192 06:31:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.192 06:31:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:38.192 06:31:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.192 06:31:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.192 06:31:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.192 06:31:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.192 06:31:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.192 06:31:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.192 06:31:48 -- accel/accel.sh@42 -- # jq -r . 00:06:38.192 [2024-11-28 06:31:48.736994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:38.192 [2024-11-28 06:31:48.737196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70664 ] 00:06:38.192 [2024-11-28 06:31:48.871346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.192 [2024-11-28 06:31:48.909812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.568 06:31:50 -- accel/accel.sh@18 -- # out=' 00:06:39.568 SPDK Configuration: 00:06:39.568 Core mask: 0x1 00:06:39.568 00:06:39.568 Accel Perf Configuration: 00:06:39.568 Workload Type: xor 00:06:39.568 Source buffers: 2 00:06:39.568 Transfer size: 4096 bytes 00:06:39.568 Vector count 1 00:06:39.568 Module: software 00:06:39.568 Queue depth: 32 00:06:39.568 Allocate depth: 32 00:06:39.568 # threads/core: 1 00:06:39.568 Run time: 1 seconds 00:06:39.568 Verify: Yes 00:06:39.568 00:06:39.568 Running for 1 seconds... 00:06:39.568 00:06:39.568 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.568 ------------------------------------------------------------------------------------ 00:06:39.568 0,0 437504/s 1709 MiB/s 0 0 00:06:39.568 ==================================================================================== 00:06:39.568 Total 437504/s 1709 MiB/s 0 0' 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:39.568 06:31:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:39.568 06:31:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.568 06:31:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.568 06:31:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.568 06:31:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.568 06:31:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.568 06:31:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.568 06:31:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.568 06:31:50 -- accel/accel.sh@42 -- # jq -r . 00:06:39.568 [2024-11-28 06:31:50.103620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
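The xor workload computes a destination buffer as the byte-wise XOR of its source buffers, two sources by default ("Source buffers: 2" in the banner), and the Verify: Yes setting re-checks the result after the run. The same operation in miniature:

    # Byte-wise XOR, the core of what this workload generates and verifies:
    printf '0x%02X\n' $(( 0xF0 ^ 0x3C ))   # -> 0xCC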
00:06:39.568 [2024-11-28 06:31:50.104087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70690 ] 00:06:39.568 [2024-11-28 06:31:50.235431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.568 [2024-11-28 06:31:50.273481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val=0x1 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val=xor 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val=2 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.568 06:31:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.568 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.568 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val=software 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val=32 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val=32 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val=1 00:06:39.569 06:31:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val=Yes 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:39.569 06:31:50 -- accel/accel.sh@21 -- # val= 00:06:39.569 06:31:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # IFS=: 00:06:39.569 06:31:50 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@21 -- # val= 00:06:40.942 06:31:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # IFS=: 00:06:40.942 06:31:51 -- accel/accel.sh@20 -- # read -r var val 00:06:40.942 06:31:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.942 06:31:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:40.942 06:31:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.942 00:06:40.942 real 0m2.733s 00:06:40.942 user 0m2.313s 00:06:40.942 sys 0m0.216s 00:06:40.942 06:31:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.943 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:06:40.943 ************************************ 00:06:40.943 END TEST accel_xor 00:06:40.943 ************************************ 00:06:40.943 06:31:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:40.943 06:31:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:40.943 06:31:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.943 06:31:51 -- common/autotest_common.sh@10 -- # set +x 00:06:40.943 ************************************ 00:06:40.943 START TEST accel_xor 00:06:40.943 ************************************ 00:06:40.943 
06:31:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:40.943 06:31:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.943 06:31:51 -- accel/accel.sh@17 -- # local accel_module 00:06:40.943 06:31:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:40.943 06:31:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:40.943 06:31:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.943 06:31:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.943 06:31:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.943 06:31:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.943 06:31:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.943 06:31:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.943 06:31:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.943 06:31:51 -- accel/accel.sh@42 -- # jq -r . 00:06:40.943 [2024-11-28 06:31:51.507760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.943 [2024-11-28 06:31:51.507958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70720 ] 00:06:40.943 [2024-11-28 06:31:51.640887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.943 [2024-11-28 06:31:51.678965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.317 06:31:52 -- accel/accel.sh@18 -- # out=' 00:06:42.317 SPDK Configuration: 00:06:42.317 Core mask: 0x1 00:06:42.317 00:06:42.317 Accel Perf Configuration: 00:06:42.317 Workload Type: xor 00:06:42.317 Source buffers: 3 00:06:42.317 Transfer size: 4096 bytes 00:06:42.317 Vector count 1 00:06:42.317 Module: software 00:06:42.317 Queue depth: 32 00:06:42.317 Allocate depth: 32 00:06:42.317 # threads/core: 1 00:06:42.317 Run time: 1 seconds 00:06:42.317 Verify: Yes 00:06:42.317 00:06:42.317 Running for 1 seconds... 00:06:42.317 00:06:42.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.317 ------------------------------------------------------------------------------------ 00:06:42.317 0,0 417280/s 1630 MiB/s 0 0 00:06:42.317 ==================================================================================== 00:06:42.317 Total 417280/s 1630 MiB/s 0 0' 00:06:42.317 06:31:52 -- accel/accel.sh@20 -- # IFS=: 00:06:42.317 06:31:52 -- accel/accel.sh@20 -- # read -r var val 00:06:42.317 06:31:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:42.317 06:31:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:42.317 06:31:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.317 06:31:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.317 06:31:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.317 06:31:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.317 06:31:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.317 06:31:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.317 06:31:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.317 06:31:52 -- accel/accel.sh@42 -- # jq -r . 00:06:42.317 [2024-11-28 06:31:52.874410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:42.318 [2024-11-28 06:31:52.874636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70735 ] 00:06:42.318 [2024-11-28 06:31:53.010545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.318 [2024-11-28 06:31:53.048247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=0x1 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=xor 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=3 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=software 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=32 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=32 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=1 00:06:42.577 06:31:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val=Yes 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:42.577 06:31:53 -- accel/accel.sh@21 -- # val= 00:06:42.577 06:31:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # IFS=: 00:06:42.577 06:31:53 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@21 -- # val= 00:06:43.529 06:31:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # IFS=: 00:06:43.529 ************************************ 00:06:43.529 END TEST accel_xor 00:06:43.529 ************************************ 00:06:43.529 06:31:54 -- accel/accel.sh@20 -- # read -r var val 00:06:43.529 06:31:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.529 06:31:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:43.530 06:31:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.530 00:06:43.530 real 0m2.735s 00:06:43.530 user 0m2.324s 00:06:43.530 sys 0m0.207s 00:06:43.530 06:31:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.530 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.530 06:31:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:43.530 06:31:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:43.530 06:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.530 06:31:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.530 ************************************ 00:06:43.530 START TEST accel_dif_verify 00:06:43.530 ************************************ 
00:06:43.530 06:31:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:43.530 06:31:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.530 06:31:54 -- accel/accel.sh@17 -- # local accel_module 00:06:43.530 06:31:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:43.530 06:31:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.530 06:31:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.530 06:31:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.530 06:31:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.530 06:31:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.530 06:31:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.530 06:31:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.530 06:31:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.530 06:31:54 -- accel/accel.sh@42 -- # jq -r . 00:06:43.530 [2024-11-28 06:31:54.283384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.530 [2024-11-28 06:31:54.283498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70776 ] 00:06:43.817 [2024-11-28 06:31:54.417189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.817 [2024-11-28 06:31:54.455403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.199 06:31:55 -- accel/accel.sh@18 -- # out=' 00:06:45.199 SPDK Configuration: 00:06:45.199 Core mask: 0x1 00:06:45.199 00:06:45.199 Accel Perf Configuration: 00:06:45.199 Workload Type: dif_verify 00:06:45.199 Vector size: 4096 bytes 00:06:45.199 Transfer size: 4096 bytes 00:06:45.199 Block size: 512 bytes 00:06:45.199 Metadata size: 8 bytes 00:06:45.199 Vector count 1 00:06:45.199 Module: software 00:06:45.199 Queue depth: 32 00:06:45.199 Allocate depth: 32 00:06:45.199 # threads/core: 1 00:06:45.199 Run time: 1 seconds 00:06:45.199 Verify: No 00:06:45.199 00:06:45.199 Running for 1 seconds... 00:06:45.199 00:06:45.199 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.199 ------------------------------------------------------------------------------------ 00:06:45.199 0,0 128448/s 501 MiB/s 0 0 00:06:45.199 ==================================================================================== 00:06:45.199 Total 128448/s 501 MiB/s 0 0' 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:45.199 06:31:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:45.199 06:31:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.199 06:31:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.199 06:31:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.199 06:31:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.199 06:31:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.199 06:31:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.199 06:31:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.199 06:31:55 -- accel/accel.sh@42 -- # jq -r . 00:06:45.199 [2024-11-28 06:31:55.649223] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:45.199 [2024-11-28 06:31:55.649440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70791 ] 00:06:45.199 [2024-11-28 06:31:55.784629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.199 [2024-11-28 06:31:55.822993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=0x1 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=dif_verify 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=software 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 
-- # val=32 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=32 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=1 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val=No 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:45.199 06:31:55 -- accel/accel.sh@21 -- # val= 00:06:45.199 06:31:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # IFS=: 00:06:45.199 06:31:55 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@21 -- # val= 00:06:46.577 06:31:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # IFS=: 00:06:46.577 06:31:56 -- accel/accel.sh@20 -- # read -r var val 00:06:46.577 06:31:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.577 06:31:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:46.577 06:31:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.577 00:06:46.577 real 0m2.735s 00:06:46.577 user 0m2.327s 00:06:46.577 sys 0m0.208s 00:06:46.577 06:31:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.577 06:31:56 -- common/autotest_common.sh@10 -- # set +x 00:06:46.577 ************************************ 00:06:46.577 END TEST 
accel_dif_verify 00:06:46.577 ************************************ 00:06:46.577 06:31:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:46.577 06:31:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:46.577 06:31:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.577 06:31:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.577 ************************************ 00:06:46.577 START TEST accel_dif_generate 00:06:46.577 ************************************ 00:06:46.577 06:31:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:46.577 06:31:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.577 06:31:57 -- accel/accel.sh@17 -- # local accel_module 00:06:46.577 06:31:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:46.577 06:31:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:46.577 06:31:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.577 06:31:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.577 06:31:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.577 06:31:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.577 06:31:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.577 06:31:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.577 06:31:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.577 06:31:57 -- accel/accel.sh@42 -- # jq -r . 00:06:46.577 [2024-11-28 06:31:57.062012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:46.577 [2024-11-28 06:31:57.062120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:06:46.577 [2024-11-28 06:31:57.189201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.577 [2024-11-28 06:31:57.229413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.962 06:31:58 -- accel/accel.sh@18 -- # out=' 00:06:47.962 SPDK Configuration: 00:06:47.962 Core mask: 0x1 00:06:47.962 00:06:47.962 Accel Perf Configuration: 00:06:47.962 Workload Type: dif_generate 00:06:47.962 Vector size: 4096 bytes 00:06:47.962 Transfer size: 4096 bytes 00:06:47.962 Block size: 512 bytes 00:06:47.962 Metadata size: 8 bytes 00:06:47.962 Vector count 1 00:06:47.962 Module: software 00:06:47.962 Queue depth: 32 00:06:47.962 Allocate depth: 32 00:06:47.962 # threads/core: 1 00:06:47.962 Run time: 1 seconds 00:06:47.962 Verify: No 00:06:47.962 00:06:47.962 Running for 1 seconds... 
00:06:47.962 00:06:47.963 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.963 ------------------------------------------------------------------------------------ 00:06:47.963 0,0 118304/s 462 MiB/s 0 0 00:06:47.963 ==================================================================================== 00:06:47.963 Total 118304/s 462 MiB/s 0 0' 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:47.963 06:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:47.963 06:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.963 06:31:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.963 06:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.963 06:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.963 06:31:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.963 06:31:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.963 06:31:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.963 06:31:58 -- accel/accel.sh@42 -- # jq -r . 00:06:47.963 [2024-11-28 06:31:58.434377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.963 [2024-11-28 06:31:58.434582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70847 ] 00:06:47.963 [2024-11-28 06:31:58.566257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.963 [2024-11-28 06:31:58.606026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=0x1 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=dif_generate 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 
00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=software 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=32 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=32 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=1 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val=No 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:47.963 06:31:58 -- accel/accel.sh@21 -- # val= 00:06:47.963 06:31:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # IFS=: 00:06:47.963 06:31:58 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- 
accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@21 -- # val= 00:06:49.353 06:31:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # IFS=: 00:06:49.353 06:31:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.353 06:31:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.353 06:31:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:49.353 06:31:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.353 00:06:49.353 real 0m2.751s 00:06:49.353 user 0m2.334s 00:06:49.353 sys 0m0.215s 00:06:49.353 06:31:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.353 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:49.353 ************************************ 00:06:49.353 END TEST accel_dif_generate 00:06:49.353 ************************************ 00:06:49.353 06:31:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:49.353 06:31:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:49.353 06:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.353 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:49.353 ************************************ 00:06:49.353 START TEST accel_dif_generate_copy 00:06:49.353 ************************************ 00:06:49.353 06:31:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:49.353 06:31:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.353 06:31:59 -- accel/accel.sh@17 -- # local accel_module 00:06:49.353 06:31:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:49.353 06:31:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:49.353 06:31:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.353 06:31:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.353 06:31:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.353 06:31:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.353 06:31:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.353 06:31:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.353 06:31:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.353 06:31:59 -- accel/accel.sh@42 -- # jq -r . 00:06:49.353 [2024-11-28 06:31:59.848965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:49.353 [2024-11-28 06:31:59.849080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70879 ] 00:06:49.353 [2024-11-28 06:31:59.985133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.353 [2024-11-28 06:32:00.023552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.740 06:32:01 -- accel/accel.sh@18 -- # out=' 00:06:50.740 SPDK Configuration: 00:06:50.740 Core mask: 0x1 00:06:50.740 00:06:50.740 Accel Perf Configuration: 00:06:50.740 Workload Type: dif_generate_copy 00:06:50.740 Vector size: 4096 bytes 00:06:50.740 Transfer size: 4096 bytes 00:06:50.740 Vector count 1 00:06:50.740 Module: software 00:06:50.740 Queue depth: 32 00:06:50.740 Allocate depth: 32 00:06:50.740 # threads/core: 1 00:06:50.740 Run time: 1 seconds 00:06:50.740 Verify: No 00:06:50.740 00:06:50.740 Running for 1 seconds... 00:06:50.740 00:06:50.740 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.740 ------------------------------------------------------------------------------------ 00:06:50.740 0,0 90112/s 352 MiB/s 0 0 00:06:50.740 ==================================================================================== 00:06:50.740 Total 90112/s 352 MiB/s 0 0' 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:50.740 06:32:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:50.740 06:32:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.740 06:32:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.740 06:32:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.740 06:32:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.740 06:32:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.740 06:32:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.740 06:32:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.740 06:32:01 -- accel/accel.sh@42 -- # jq -r . 00:06:50.740 [2024-11-28 06:32:01.230005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:50.740 [2024-11-28 06:32:01.230130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70894 ] 00:06:50.740 [2024-11-28 06:32:01.365596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.740 [2024-11-28 06:32:01.405236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=0x1 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=software 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=32 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=32 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 
-- # val=1 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val=No 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:50.740 06:32:01 -- accel/accel.sh@21 -- # val= 00:06:50.740 06:32:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.740 06:32:01 -- accel/accel.sh@20 -- # IFS=: 00:06:50.741 06:32:01 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@21 -- # val= 00:06:52.120 06:32:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # IFS=: 00:06:52.120 06:32:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.120 06:32:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.120 06:32:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:52.120 06:32:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.120 00:06:52.120 real 0m2.746s 00:06:52.120 user 0m2.331s 00:06:52.120 sys 0m0.214s 00:06:52.120 ************************************ 00:06:52.120 END TEST accel_dif_generate_copy 00:06:52.120 06:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.120 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:06:52.120 ************************************ 00:06:52.120 06:32:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:52.120 06:32:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.120 06:32:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:52.120 06:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.120 06:32:02 -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.120 ************************************ 00:06:52.120 START TEST accel_comp 00:06:52.120 ************************************ 00:06:52.120 06:32:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.120 06:32:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.120 06:32:02 -- accel/accel.sh@17 -- # local accel_module 00:06:52.120 06:32:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.120 06:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.120 06:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.120 06:32:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.120 06:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.120 06:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.120 06:32:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.120 06:32:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.120 06:32:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.120 06:32:02 -- accel/accel.sh@42 -- # jq -r . 00:06:52.120 [2024-11-28 06:32:02.633808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.120 [2024-11-28 06:32:02.633911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:06:52.120 [2024-11-28 06:32:02.767191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.120 [2024-11-28 06:32:02.804427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.495 06:32:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:53.495 00:06:53.495 SPDK Configuration: 00:06:53.495 Core mask: 0x1 00:06:53.495 00:06:53.495 Accel Perf Configuration: 00:06:53.495 Workload Type: compress 00:06:53.495 Transfer size: 4096 bytes 00:06:53.495 Vector count 1 00:06:53.495 Module: software 00:06:53.495 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.495 Queue depth: 32 00:06:53.495 Allocate depth: 32 00:06:53.495 # threads/core: 1 00:06:53.495 Run time: 1 seconds 00:06:53.495 Verify: No 00:06:53.495 00:06:53.495 Running for 1 seconds... 
00:06:53.495 00:06:53.495 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.495 ------------------------------------------------------------------------------------ 00:06:53.495 0,0 64032/s 250 MiB/s 0 0 00:06:53.495 ==================================================================================== 00:06:53.495 Total 64032/s 250 MiB/s 0 0' 00:06:53.495 06:32:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.495 06:32:03 -- accel/accel.sh@20 -- # IFS=: 00:06:53.495 06:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.495 06:32:03 -- accel/accel.sh@20 -- # read -r var val 00:06:53.495 06:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.495 06:32:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.495 06:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.495 06:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.495 06:32:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.495 06:32:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.495 06:32:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.495 06:32:03 -- accel/accel.sh@42 -- # jq -r . 00:06:53.495 [2024-11-28 06:32:03.989898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:53.495 [2024-11-28 06:32:03.990006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70950 ] 00:06:53.495 [2024-11-28 06:32:04.122972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.495 [2024-11-28 06:32:04.159691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.495 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.495 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.495 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=0x1 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=compress 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 
00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=software 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=32 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=32 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=1 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val=No 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:53.496 06:32:04 -- accel/accel.sh@21 -- # val= 00:06:53.496 06:32:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # IFS=: 00:06:53.496 06:32:04 -- accel/accel.sh@20 -- # read -r var val 00:06:54.869 06:32:05 -- accel/accel.sh@21 -- # val= 00:06:54.869 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.869 06:32:05 -- accel/accel.sh@21 -- # val= 00:06:54.869 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.869 06:32:05 -- accel/accel.sh@21 -- # val= 00:06:54.869 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.869 06:32:05 -- accel/accel.sh@21 -- # val= 
00:06:54.869 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.869 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.869 06:32:05 -- accel/accel.sh@21 -- # val= 00:06:54.870 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.870 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.870 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.870 06:32:05 -- accel/accel.sh@21 -- # val= 00:06:54.870 06:32:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.870 06:32:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.870 06:32:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.870 06:32:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.870 06:32:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:54.870 06:32:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.870 00:06:54.870 real 0m2.723s 00:06:54.870 user 0m2.315s 00:06:54.870 sys 0m0.206s 00:06:54.870 06:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.870 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:54.870 ************************************ 00:06:54.870 END TEST accel_comp 00:06:54.870 ************************************ 00:06:54.870 06:32:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.870 06:32:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:54.870 06:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.870 06:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:54.870 ************************************ 00:06:54.870 START TEST accel_decomp 00:06:54.870 ************************************ 00:06:54.870 06:32:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.870 06:32:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.870 06:32:05 -- accel/accel.sh@17 -- # local accel_module 00:06:54.870 06:32:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.870 06:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:54.870 06:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.870 06:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.870 06:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.870 06:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.870 06:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.870 06:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.870 06:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.870 06:32:05 -- accel/accel.sh@42 -- # jq -r . 00:06:54.870 [2024-11-28 06:32:05.389717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.870 [2024-11-28 06:32:05.389817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70987 ] 00:06:54.870 [2024-11-28 06:32:05.522832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.870 [2024-11-28 06:32:05.567698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.244 06:32:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:56.244
00:06:56.244 SPDK Configuration:
00:06:56.244 Core mask: 0x1
00:06:56.244
00:06:56.244 Accel Perf Configuration:
00:06:56.244 Workload Type: decompress
00:06:56.244 Transfer size: 4096 bytes
00:06:56.244 Vector count 1
00:06:56.244 Module: software
00:06:56.244 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:56.244 Queue depth: 32
00:06:56.244 Allocate depth: 32
00:06:56.244 # threads/core: 1
00:06:56.244 Run time: 1 seconds
00:06:56.244 Verify: Yes
00:06:56.244
00:06:56.244 Running for 1 seconds...
00:06:56.244
00:06:56.244 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:56.244 ------------------------------------------------------------------------------------
00:06:56.244 0,0 83712/s 154 MiB/s 0 0
00:06:56.244 ====================================================================================
00:06:56.244 Total 83712/s 327 MiB/s 0 0'
00:06:56.244 06:32:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=:
00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val
00:06:56.244 06:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:06:56.244 06:32:06 -- accel/accel.sh@12 -- # build_accel_config
00:06:56.244 06:32:06 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:56.244 06:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:56.244 06:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:56.244 06:32:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:56.244 06:32:06 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:56.244 06:32:06 -- accel/accel.sh@41 -- # local IFS=,
00:06:56.244 06:32:06 -- accel/accel.sh@42 -- # jq -r .
00:06:56.244 [2024-11-28 06:32:06.749137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
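The totals in the table above are internally consistent: 83712 transfers/s at the configured 4096-byte transfer size works out to the reported 327 MiB/s (the per-row 154 MiB/s figure is presumably measured against the compressed input side of the decompress operation; the log itself does not say). A quick shell check:

    echo $(( 83712 * 4096 / 1024 / 1024 ))   # -> 327 (MiB/s), matching the Total row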
00:06:56.244 [2024-11-28 06:32:06.749229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:06:56.244 [2024-11-28 06:32:06.879215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.244 [2024-11-28 06:32:06.916973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=0x1 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=decompress 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=software 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=32 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- 
accel/accel.sh@21 -- # val=32 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=1 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.244 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.244 06:32:06 -- accel/accel.sh@21 -- # val=Yes 00:06:56.244 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.245 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.245 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.245 06:32:06 -- accel/accel.sh@21 -- # val= 00:06:56.245 06:32:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # IFS=: 00:06:56.245 06:32:06 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@21 -- # val= 00:06:57.620 06:32:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # IFS=: 00:06:57.620 06:32:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.620 06:32:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.620 06:32:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.620 06:32:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.620 00:06:57.620 real 0m2.720s 00:06:57.620 user 0m2.324s 00:06:57.620 sys 0m0.199s 00:06:57.620 06:32:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.620 06:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 ************************************ 00:06:57.620 END TEST accel_decomp 00:06:57.620 ************************************ 00:06:57.620 06:32:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
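The run_test invocation above repeats the decompress case with -o 0 appended (the accel_decmop_full spelling comes from the test script itself and is reproduced verbatim in the banners below). Comparing the two configuration dumps, the only difference -o 0 makes is the transfer size, 4096 bytes in the previous case versus 111250 bytes below, so it appears to switch accel_perf from fixed 4-KiB buffers to the bib test file's full uncompressed chunk. Side by side, with $testdir as a hypothetical shorthand for /home/vagrant/spdk_repo/spdk/test/accel:

    accel_test -t 1 -w decompress -l "$testdir/bib" -y        # 4096-byte transfers
    accel_test -t 1 -w decompress -l "$testdir/bib" -y -o 0   # full 111250-byte buffers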
00:06:57.620 06:32:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:57.620 06:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.620 06:32:08 -- common/autotest_common.sh@10 -- # set +x 00:06:57.620 ************************************ 00:06:57.620 START TEST accel_decmop_full 00:06:57.620 ************************************ 00:06:57.620 06:32:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.620 06:32:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.620 06:32:08 -- accel/accel.sh@17 -- # local accel_module 00:06:57.620 06:32:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.620 06:32:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:57.620 06:32:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.620 06:32:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.620 06:32:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.620 06:32:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.620 06:32:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.620 06:32:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.620 06:32:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.620 06:32:08 -- accel/accel.sh@42 -- # jq -r . 00:06:57.620 [2024-11-28 06:32:08.147283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.620 [2024-11-28 06:32:08.147390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71037 ] 00:06:57.620 [2024-11-28 06:32:08.282272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.620 [2024-11-28 06:32:08.318519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.021 06:32:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.021 00:06:59.021 SPDK Configuration: 00:06:59.021 Core mask: 0x1 00:06:59.021 00:06:59.021 Accel Perf Configuration: 00:06:59.021 Workload Type: decompress 00:06:59.021 Transfer size: 111250 bytes 00:06:59.021 Vector count 1 00:06:59.021 Module: software 00:06:59.021 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.021 Queue depth: 32 00:06:59.021 Allocate depth: 32 00:06:59.021 # threads/core: 1 00:06:59.021 Run time: 1 seconds 00:06:59.021 Verify: Yes 00:06:59.021 00:06:59.021 Running for 1 seconds... 
00:06:59.021 00:06:59.021 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.021 ------------------------------------------------------------------------------------ 00:06:59.021 0,0 5824/s 240 MiB/s 0 0 00:06:59.021 ==================================================================================== 00:06:59.021 Total 5824/s 617 MiB/s 0 0' 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.021 06:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:59.021 06:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.021 06:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.021 06:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.021 06:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.021 06:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.021 06:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.021 06:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.021 06:32:09 -- accel/accel.sh@42 -- # jq -r . 00:06:59.021 [2024-11-28 06:32:09.524622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:59.021 [2024-11-28 06:32:09.524999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:06:59.021 [2024-11-28 06:32:09.655565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.021 [2024-11-28 06:32:09.692337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=0x1 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=decompress 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.021 06:32:09 -- accel/accel.sh@20 
-- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=software 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=32 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=32 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=1 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val=Yes 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.021 06:32:09 -- accel/accel.sh@21 -- # val= 00:06:59.021 06:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # IFS=: 00:06:59.021 06:32:09 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # 
val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@21 -- # val= 00:07:00.395 06:32:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # IFS=: 00:07:00.395 06:32:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.395 06:32:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.395 06:32:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.395 06:32:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.395 00:07:00.395 real 0m2.750s 00:07:00.395 user 0m2.349s 00:07:00.395 sys 0m0.197s 00:07:00.395 06:32:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.395 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:00.395 ************************************ 00:07:00.395 END TEST accel_decmop_full 00:07:00.395 ************************************ 00:07:00.395 06:32:10 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.395 06:32:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:00.395 06:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.395 06:32:10 -- common/autotest_common.sh@10 -- # set +x 00:07:00.395 ************************************ 00:07:00.395 START TEST accel_decomp_mcore 00:07:00.395 ************************************ 00:07:00.395 06:32:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.395 06:32:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.395 06:32:10 -- accel/accel.sh@17 -- # local accel_module 00:07:00.395 06:32:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.395 06:32:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:00.395 06:32:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.395 06:32:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.395 06:32:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.395 06:32:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.395 06:32:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.395 06:32:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.396 06:32:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.396 06:32:10 -- accel/accel.sh@42 -- # jq -r . 00:07:00.396 [2024-11-28 06:32:10.936070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
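accel_decomp_mcore adds -m 0xf, which shows up in the EAL parameters below as -c 0xf: a hexadecimal core mask whose four set bits bring up reactors on cores 0 through 3 rather than the single core 0 used so far. Expanding a mask into core ids is plain shell arithmetic:

    mask=0xf
    for c in {0..7}; do
        (( (mask >> c) & 1 )) && echo "reactor expected on core $c"
    done
    # prints cores 0 through 3 for 0xf, matching the four reactor notices below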
00:07:00.396 [2024-11-28 06:32:10.936471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ]
00:07:00.396 [2024-11-28 06:32:11.067228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:00.396 [2024-11-28 06:32:11.107172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.396 [2024-11-28 06:32:11.107480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.396 [2024-11-28 06:32:11.107482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.396 [2024-11-28 06:32:11.107547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:01.770 06:32:12 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:01.770
00:07:01.770 SPDK Configuration:
00:07:01.770 Core mask: 0xf
00:07:01.770
00:07:01.770 Accel Perf Configuration:
00:07:01.770 Workload Type: decompress
00:07:01.770 Transfer size: 4096 bytes
00:07:01.770 Vector count 1
00:07:01.770 Module: software
00:07:01.770 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:01.770 Queue depth: 32
00:07:01.770 Allocate depth: 32
00:07:01.770 # threads/core: 1
00:07:01.770 Run time: 1 seconds
00:07:01.770 Verify: Yes
00:07:01.770
00:07:01.770 Running for 1 seconds...
00:07:01.770
00:07:01.770 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:01.770 ------------------------------------------------------------------------------------
00:07:01.770 0,0 73088/s 134 MiB/s 0 0
00:07:01.770 3,0 52576/s 96 MiB/s 0 0
00:07:01.770 2,0 52512/s 96 MiB/s 0 0
00:07:01.770 1,0 52640/s 97 MiB/s 0 0
00:07:01.770 ====================================================================================
00:07:01.770 Total 230816/s 901 MiB/s 0 0'
00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=:
00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val
00:07:01.770 06:32:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:01.770 06:32:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:01.770 06:32:12 -- accel/accel.sh@12 -- # build_accel_config
00:07:01.770 06:32:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:01.770 06:32:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:01.770 06:32:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:01.770 06:32:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:01.770 06:32:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:01.770 06:32:12 -- accel/accel.sh@41 -- # local IFS=,
00:07:01.770 06:32:12 -- accel/accel.sh@42 -- # jq -r .
00:07:01.770 [2024-11-28 06:32:12.308261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
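The four rows of the multicore table above sum exactly to the Total line, and core 0 runs measurably ahead of cores 1-3 (73088/s versus roughly 52.6k/s); the log does not explain the imbalance, though core 0 is also the core the app itself starts on. Checking the arithmetic:

    echo $(( 73088 + 52576 + 52512 + 52640 ))   # -> 230816, the Total row
    echo $(( 230816 * 4096 / 1024 / 1024 ))     # -> 901 (MiB/s), as reported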
00:07:01.770 [2024-11-28 06:32:12.308370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71113 ] 00:07:01.770 [2024-11-28 06:32:12.438912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.770 [2024-11-28 06:32:12.480841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.770 [2024-11-28 06:32:12.480944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.770 [2024-11-28 06:32:12.481253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.770 [2024-11-28 06:32:12.481285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=0xf 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=decompress 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=software 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 
00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=32 00:07:01.770 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.770 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.770 06:32:12 -- accel/accel.sh@21 -- # val=32 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.771 06:32:12 -- accel/accel.sh@21 -- # val=1 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.771 06:32:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.771 06:32:12 -- accel/accel.sh@21 -- # val=Yes 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.771 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.771 06:32:12 -- accel/accel.sh@21 -- # val= 00:07:01.771 06:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.771 06:32:12 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- 
accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@21 -- # val= 00:07:03.145 06:32:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # IFS=: 00:07:03.145 06:32:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.145 06:32:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.145 06:32:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.145 06:32:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.145 00:07:03.145 real 0m2.754s 00:07:03.145 user 0m8.899s 00:07:03.145 sys 0m0.259s 00:07:03.145 06:32:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.145 ************************************ 00:07:03.145 END TEST accel_decomp_mcore 00:07:03.145 ************************************ 00:07:03.145 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:03.145 06:32:13 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.145 06:32:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:03.145 06:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.145 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:03.145 ************************************ 00:07:03.145 START TEST accel_decomp_full_mcore 00:07:03.145 ************************************ 00:07:03.145 06:32:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.145 06:32:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.145 06:32:13 -- accel/accel.sh@17 -- # local accel_module 00:07:03.145 06:32:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.145 06:32:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:03.145 06:32:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.145 06:32:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.145 06:32:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.145 06:32:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.145 06:32:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.145 06:32:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.145 06:32:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.145 06:32:13 -- accel/accel.sh@42 -- # jq -r . 00:07:03.145 [2024-11-28 06:32:13.761206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.145 [2024-11-28 06:32:13.761370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71152 ] 00:07:03.145 [2024-11-28 06:32:13.899141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.403 [2024-11-28 06:32:13.946619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.403 [2024-11-28 06:32:13.946851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.403 [2024-11-28 06:32:13.947023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.403 [2024-11-28 06:32:13.947115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.777 06:32:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:04.777
00:07:04.777 SPDK Configuration:
00:07:04.777 Core mask: 0xf
00:07:04.777
00:07:04.777 Accel Perf Configuration:
00:07:04.777 Workload Type: decompress
00:07:04.777 Transfer size: 111250 bytes
00:07:04.777 Vector count 1
00:07:04.777 Module: software
00:07:04.777 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:04.777 Queue depth: 32
00:07:04.777 Allocate depth: 32
00:07:04.777 # threads/core: 1
00:07:04.777 Run time: 1 seconds
00:07:04.777 Verify: Yes
00:07:04.777
00:07:04.777 Running for 1 seconds...
00:07:04.777
00:07:04.777 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:04.777 ------------------------------------------------------------------------------------
00:07:04.777 0,0 5760/s 237 MiB/s 0 0
00:07:04.777 3,0 4320/s 178 MiB/s 0 0
00:07:04.777 2,0 4064/s 167 MiB/s 0 0
00:07:04.777 1,0 4064/s 167 MiB/s 0 0
00:07:04.777 ====================================================================================
00:07:04.777 Total 18208/s 1931 MiB/s 0 0'
00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=:
00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val
00:07:04.777 06:32:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:04.777 06:32:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:04.777 06:32:15 -- accel/accel.sh@12 -- # build_accel_config
00:07:04.777 06:32:15 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:04.777 06:32:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:04.777 06:32:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:04.777 06:32:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:04.777 06:32:15 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:04.777 06:32:15 -- accel/accel.sh@41 -- # local IFS=,
00:07:04.777 06:32:15 -- accel/accel.sh@42 -- # jq -r .
00:07:04.777 [2024-11-28 06:32:15.175617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
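The same cross-check holds for the full-buffer multicore run above; with 111250-byte transfers the per-transfer rate drops but aggregate bandwidth roughly doubles relative to the 4096-byte multicore case (1931 MiB/s versus 901 MiB/s):

    echo $(( 5760 + 4320 + 4064 + 4064 ))      # -> 18208, the Total row
    echo $(( 18208 * 111250 / 1024 / 1024 ))   # -> 1931 (MiB/s), as reported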
00:07:04.777 [2024-11-28 06:32:15.175738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:07:04.777 [2024-11-28 06:32:15.308195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.777 [2024-11-28 06:32:15.355228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.777 [2024-11-28 06:32:15.355565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.777 [2024-11-28 06:32:15.355817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.777 [2024-11-28 06:32:15.355875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=0xf 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=decompress 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=software 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=32 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.777 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.777 06:32:15 -- accel/accel.sh@21 -- # val=32 00:07:04.777 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.778 06:32:15 -- accel/accel.sh@21 -- # val=1 00:07:04.778 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.778 06:32:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.778 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.778 06:32:15 -- accel/accel.sh@21 -- # val=Yes 00:07:04.778 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.778 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.778 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.778 06:32:15 -- accel/accel.sh@21 -- # val= 00:07:04.778 06:32:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.778 06:32:15 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@21 -- # val= 00:07:06.152 06:32:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # IFS=: 00:07:06.152 06:32:16 -- accel/accel.sh@20 -- # read -r var val 00:07:06.152 06:32:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.152 06:32:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.152 06:32:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.152 00:07:06.152 real 0m2.834s 00:07:06.152 user 0m9.035s 00:07:06.152 sys 0m0.293s 00:07:06.152 06:32:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.152 ************************************ 00:07:06.152 END TEST accel_decomp_full_mcore 00:07:06.152 ************************************ 00:07:06.152 06:32:16 -- common/autotest_common.sh@10 -- # set +x 00:07:06.152 06:32:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.152 06:32:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:06.152 06:32:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.152 06:32:16 -- common/autotest_common.sh@10 -- # set +x 00:07:06.152 ************************************ 00:07:06.152 START TEST accel_decomp_mthread 00:07:06.152 ************************************ 00:07:06.152 06:32:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.152 06:32:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.152 06:32:16 -- accel/accel.sh@17 -- # local accel_module 00:07:06.152 06:32:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.153 06:32:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:06.153 06:32:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.153 06:32:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.153 06:32:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.153 06:32:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.153 06:32:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.153 06:32:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.153 06:32:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.153 06:32:16 -- accel/accel.sh@42 -- # jq -r . 00:07:06.153 [2024-11-28 06:32:16.644303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.153 [2024-11-28 06:32:16.644407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71214 ] 00:07:06.153 [2024-11-28 06:32:16.779283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.153 [2024-11-28 06:32:16.833892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.529 06:32:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:07.529
00:07:07.529 SPDK Configuration:
00:07:07.529 Core mask: 0x1
00:07:07.529
00:07:07.529 Accel Perf Configuration:
00:07:07.529 Workload Type: decompress
00:07:07.529 Transfer size: 4096 bytes
00:07:07.529 Vector count 1
00:07:07.529 Module: software
00:07:07.529 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:07.529 Queue depth: 32
00:07:07.529 Allocate depth: 32
00:07:07.529 # threads/core: 2
00:07:07.529 Run time: 1 seconds
00:07:07.529 Verify: Yes
00:07:07.529
00:07:07.529 Running for 1 seconds...
00:07:07.529
00:07:07.529 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:07.529 ------------------------------------------------------------------------------------
00:07:07.529 0,1 42112/s 77 MiB/s 0 0
00:07:07.529 0,0 41984/s 77 MiB/s 0 0
00:07:07.529 ====================================================================================
00:07:07.529 Total 84096/s 328 MiB/s 0 0'
00:07:07.529 06:32:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=:
00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val
00:07:07.529 06:32:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:07.529 06:32:18 -- accel/accel.sh@12 -- # build_accel_config
00:07:07.529 06:32:18 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:07.529 06:32:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:07.529 06:32:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:07.529 06:32:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:07.529 06:32:18 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:07.529 06:32:18 -- accel/accel.sh@41 -- # local IFS=,
00:07:07.529 06:32:18 -- accel/accel.sh@42 -- # jq -r .
00:07:07.529 [2024-11-28 06:32:18.035424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
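accel_decomp_mthread swaps the core mask for -T 2, visible above as '# threads/core: 2': one core (mask 0x1) now carries two worker threads, reported separately as rows 0,0 and 0,1. Their combined 84096 transfers/s is essentially the 83712/s the single-threaded run achieved earlier, which is what one would expect from two threads sharing one CPU-bound core under the software module. The final case below, accel_deomp_full_mthread (again spelled as in the test script), combines this with -o 0:

    echo $(( 42112 + 41984 ))                # -> 84096, the Total row
    echo $(( 84096 * 4096 / 1024 / 1024 ))   # -> 328 (MiB/s), as reported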
00:07:07.529 [2024-11-28 06:32:18.035524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:07:07.529 [2024-11-28 06:32:18.167109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.529 [2024-11-28 06:32:18.205433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=0x1 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=decompress 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=software 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=32 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- 
accel/accel.sh@21 -- # val=32 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=2 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val=Yes 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.529 06:32:18 -- accel/accel.sh@21 -- # val= 00:07:07.529 06:32:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.529 06:32:18 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@21 -- # val= 00:07:08.905 06:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.905 06:32:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.905 06:32:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.905 06:32:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.905 ************************************ 00:07:08.905 END TEST accel_decomp_mthread 00:07:08.905 ************************************ 00:07:08.905 06:32:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.905 00:07:08.905 real 0m2.765s 00:07:08.905 user 0m2.338s 00:07:08.905 sys 0m0.227s 00:07:08.905 06:32:19 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:08.905 06:32:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.905 06:32:19 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.905 06:32:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:08.905 06:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.905 06:32:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.905 ************************************ 00:07:08.905 START TEST accel_deomp_full_mthread 00:07:08.905 ************************************ 00:07:08.905 06:32:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.905 06:32:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.905 06:32:19 -- accel/accel.sh@17 -- # local accel_module 00:07:08.905 06:32:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.905 06:32:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.905 06:32:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.905 06:32:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.905 06:32:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.905 06:32:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.905 06:32:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.905 06:32:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.905 06:32:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.905 06:32:19 -- accel/accel.sh@42 -- # jq -r . 00:07:08.905 [2024-11-28 06:32:19.463612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.905 [2024-11-28 06:32:19.463800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71266 ] 00:07:08.905 [2024-11-28 06:32:19.597394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.905 [2024-11-28 06:32:19.636569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.296 06:32:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:10.296 00:07:10.296 SPDK Configuration: 00:07:10.296 Core mask: 0x1 00:07:10.296 00:07:10.296 Accel Perf Configuration: 00:07:10.296 Workload Type: decompress 00:07:10.296 Transfer size: 111250 bytes 00:07:10.296 Vector count 1 00:07:10.296 Module: software 00:07:10.296 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.296 Queue depth: 32 00:07:10.296 Allocate depth: 32 00:07:10.296 # threads/core: 2 00:07:10.296 Run time: 1 seconds 00:07:10.296 Verify: Yes 00:07:10.296 00:07:10.296 Running for 1 seconds... 
00:07:10.296 00:07:10.296 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.296 ------------------------------------------------------------------------------------ 00:07:10.296 0,1 2944/s 121 MiB/s 0 0 00:07:10.296 0,0 2880/s 118 MiB/s 0 0 00:07:10.296 ==================================================================================== 00:07:10.296 Total 5824/s 617 MiB/s 0 0' 00:07:10.296 06:32:20 -- accel/accel.sh@20 -- # IFS=: 00:07:10.296 06:32:20 -- accel/accel.sh@20 -- # read -r var val 00:07:10.296 06:32:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.297 06:32:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:10.297 06:32:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.297 06:32:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.297 06:32:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.297 06:32:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.297 06:32:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.297 06:32:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.297 06:32:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.297 06:32:20 -- accel/accel.sh@42 -- # jq -r . 00:07:10.297 [2024-11-28 06:32:20.861272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:10.297 [2024-11-28 06:32:20.861393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71287 ] 00:07:10.297 [2024-11-28 06:32:20.999386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.297 [2024-11-28 06:32:21.047044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=0x1 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=decompress 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=software 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=32 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=32 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=2 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val=Yes 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.555 06:32:21 -- accel/accel.sh@21 -- # val= 00:07:10.555 06:32:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.555 06:32:21 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # 
read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@21 -- # val= 00:07:11.492 06:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.492 06:32:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.492 06:32:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.492 06:32:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:11.492 ************************************ 00:07:11.492 END TEST accel_decomp_full_mthread 00:07:11.492 ************************************ 00:07:11.492 06:32:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.492 00:07:11.492 real 0m2.808s 00:07:11.492 user 0m2.376s 00:07:11.492 sys 0m0.230s 00:07:11.492 06:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.492 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.752 06:32:22 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:11.752 06:32:22 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.752 06:32:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:11.752 06:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.752 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.752 06:32:22 -- accel/accel.sh@129 -- # build_accel_config 00:07:11.752 06:32:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.752 06:32:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.752 06:32:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.752 06:32:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.752 06:32:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.752 06:32:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.752 06:32:22 -- accel/accel.sh@42 -- # jq -r . 00:07:11.752 ************************************ 00:07:11.752 START TEST accel_dif_functional_tests 00:07:11.752 ************************************ 00:07:11.752 06:32:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:11.752 [2024-11-28 06:32:22.357691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:07:11.752 [2024-11-28 06:32:22.357816] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71323 ] 00:07:11.752 [2024-11-28 06:32:22.491776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.010 [2024-11-28 06:32:22.552589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.010 [2024-11-28 06:32:22.552926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.010 [2024-11-28 06:32:22.552955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.010 00:07:12.010 00:07:12.010 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.010 http://cunit.sourceforge.net/ 00:07:12.010 00:07:12.010 00:07:12.010 Suite: accel_dif 00:07:12.010 Test: verify: DIF generated, GUARD check ...passed 00:07:12.010 Test: verify: DIF generated, APPTAG check ...passed 00:07:12.010 Test: verify: DIF generated, REFTAG check ...passed 00:07:12.010 Test: verify: DIF not generated, GUARD check ...passed 00:07:12.010 Test: verify: DIF not generated, APPTAG check ...passed 00:07:12.010 Test: verify: DIF not generated, REFTAG check ...[2024-11-28 06:32:22.624537] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 [2024-11-28 06:32:22.624586] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 [2024-11-28 06:32:22.624630] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a [2024-11-28 06:32:22.624674] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a [2024-11-28 06:32:22.624701] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a passed 00:07:12.010 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:12.010 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:12.010 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:12.010 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:12.010 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-28 06:32:22.624751] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a [2024-11-28 06:32:22.624907] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 passed 00:07:12.010 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:12.010 Test: generate copy: DIF generated, GUARD check ...passed 00:07:12.010 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:12.010 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:12.010 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:12.010 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-11-28 06:32:22.625118] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 passed 00:07:12.010 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:12.010 Test: generate copy: iovecs-len validate ...passed 00:07:12.010 Test: generate copy: buffer alignment validate ...[2024-11-28 06:32:22.625450] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:12.010 passed 00:07:12.010 00:07:12.010 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.010 suites 1 1 n/a 0 0 00:07:12.010 tests 20 20 20 0 0 00:07:12.010 asserts 204 204 204 0 n/a 00:07:12.010 00:07:12.010 Elapsed time = 0.003 seconds 00:07:12.268 ************************************ 00:07:12.268 END TEST accel_dif_functional_tests 00:07:12.268 ************************************ 00:07:12.268 00:07:12.268 real 0m0.493s 00:07:12.268 user 0m0.547s 00:07:12.268 sys 0m0.169s 00:07:12.268 06:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.268 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:07:12.268 00:07:12.268 real 0m58.805s 00:07:12.268 user 1m2.544s 00:07:12.268 sys 0m5.892s 00:07:12.268 06:32:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.268 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:07:12.268 ************************************ 00:07:12.268 END TEST accel 00:07:12.268 ************************************ 00:07:12.268 06:32:22 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:12.268 06:32:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.268 06:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.268 06:32:22 -- common/autotest_common.sh@10 -- # set +x 00:07:12.268 ************************************ 00:07:12.268 START TEST accel_rpc 00:07:12.268 ************************************ 00:07:12.268 06:32:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:12.268 * Looking for test storage... 00:07:12.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:12.268 06:32:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:12.268 06:32:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:12.268 06:32:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:12.558 06:32:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:12.558 06:32:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:12.558 06:32:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:12.558 06:32:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:12.558 06:32:23 -- scripts/common.sh@335 -- # IFS=.-: 00:07:12.558 06:32:23 -- scripts/common.sh@335 -- # read -ra ver1 00:07:12.558 06:32:23 -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.558 06:32:23 -- scripts/common.sh@336 -- # read -ra ver2 00:07:12.558 06:32:23 -- scripts/common.sh@337 -- # local 'op=<' 00:07:12.558 06:32:23 -- scripts/common.sh@339 -- # ver1_l=2 00:07:12.558 06:32:23 -- scripts/common.sh@340 -- # ver2_l=1 00:07:12.558 06:32:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:12.558 06:32:23 -- scripts/common.sh@343 -- # case "$op" in 00:07:12.558 06:32:23 -- scripts/common.sh@344 -- # : 1 00:07:12.558 06:32:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:12.558 06:32:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.558 06:32:23 -- scripts/common.sh@364 -- # decimal 1 00:07:12.558 06:32:23 -- scripts/common.sh@352 -- # local d=1 00:07:12.558 06:32:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.558 06:32:23 -- scripts/common.sh@354 -- # echo 1 00:07:12.558 06:32:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:12.558 06:32:23 -- scripts/common.sh@365 -- # decimal 2 00:07:12.558 06:32:23 -- scripts/common.sh@352 -- # local d=2 00:07:12.558 06:32:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.558 06:32:23 -- scripts/common.sh@354 -- # echo 2 00:07:12.558 06:32:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:12.558 06:32:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:12.558 06:32:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:12.558 06:32:23 -- scripts/common.sh@367 -- # return 0 00:07:12.558 06:32:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.558 06:32:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.558 --rc genhtml_branch_coverage=1 00:07:12.558 --rc genhtml_function_coverage=1 00:07:12.558 --rc genhtml_legend=1 00:07:12.558 --rc geninfo_all_blocks=1 00:07:12.558 --rc geninfo_unexecuted_blocks=1 00:07:12.558 00:07:12.558 ' 00:07:12.558 06:32:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.558 --rc genhtml_branch_coverage=1 00:07:12.558 --rc genhtml_function_coverage=1 00:07:12.558 --rc genhtml_legend=1 00:07:12.558 --rc geninfo_all_blocks=1 00:07:12.558 --rc geninfo_unexecuted_blocks=1 00:07:12.558 00:07:12.558 ' 00:07:12.558 06:32:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.558 --rc genhtml_branch_coverage=1 00:07:12.558 --rc genhtml_function_coverage=1 00:07:12.558 --rc genhtml_legend=1 00:07:12.558 --rc geninfo_all_blocks=1 00:07:12.558 --rc geninfo_unexecuted_blocks=1 00:07:12.558 00:07:12.558 ' 00:07:12.558 06:32:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.558 --rc genhtml_branch_coverage=1 00:07:12.558 --rc genhtml_function_coverage=1 00:07:12.558 --rc genhtml_legend=1 00:07:12.558 --rc geninfo_all_blocks=1 00:07:12.558 --rc geninfo_unexecuted_blocks=1 00:07:12.558 00:07:12.558 ' 00:07:12.558 06:32:23 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:12.558 06:32:23 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71396 00:07:12.558 06:32:23 -- accel/accel_rpc.sh@15 -- # waitforlisten 71396 00:07:12.558 06:32:23 -- common/autotest_common.sh@829 -- # '[' -z 71396 ']' 00:07:12.558 06:32:23 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:12.558 06:32:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.558 06:32:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.558 06:32:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
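The waitforlisten helper being traced here simply polls the target's RPC socket until it answers. A standalone sketch of the same startup pattern (hypothetical variable names; stock rpc.py client; paths per this checkout):

    # Start the target paused so RPCs can reconfigure it before framework init.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    # Poll until /var/tmp/spdk.sock answers, up to the max_retries=100 seen above.
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version &> /dev/null; then
            break    # target is up and listening
        fi
        sleep 0.1
    done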
00:07:12.558 06:32:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.558 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:12.558 [2024-11-28 06:32:23.131457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.558 [2024-11-28 06:32:23.131849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:07:12.558 [2024-11-28 06:32:23.267486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.820 [2024-11-28 06:32:23.327777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.820 [2024-11-28 06:32:23.327971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.436 06:32:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.436 06:32:23 -- common/autotest_common.sh@862 -- # return 0 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:13.436 06:32:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.436 06:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.436 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:13.436 ************************************ 00:07:13.436 START TEST accel_assign_opcode 00:07:13.436 ************************************ 00:07:13.436 06:32:23 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:13.436 06:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.436 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:13.436 [2024-11-28 06:32:23.976620] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:13.436 06:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:13.436 06:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.436 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:13.436 [2024-11-28 06:32:23.984592] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:13.436 06:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.436 06:32:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:13.436 06:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.436 06:32:23 -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 06:32:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.694 06:32:24 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:13.694 06:32:24 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:13.694 06:32:24 -- accel/accel_rpc.sh@42 -- # grep software 00:07:13.694 06:32:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.694 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 06:32:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.694 software 00:07:13.694 
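The lone "software" printed just above is the grep hit the suite is looking for: with the target still paused by --wait-for-rpc, the copy opcode can be reassigned freely, and the last assignment before framework_start_init wins. The same flow by hand, roughly (rpc.py path per this checkout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init, even for a bogus module
    $rpc accel_assign_opc -o copy -m software     # overrides the assignment above
    $rpc framework_start_init                     # module selection is resolved here
    $rpc accel_get_opc_assignments | jq -r .copy  # prints: software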
************************************ 00:07:13.694 END TEST accel_assign_opcode 00:07:13.694 ************************************ 00:07:13.694 00:07:13.694 real 0m0.217s 00:07:13.694 user 0m0.035s 00:07:13.694 sys 0m0.011s 00:07:13.694 06:32:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.694 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 06:32:24 -- accel/accel_rpc.sh@55 -- # killprocess 71396 00:07:13.694 06:32:24 -- common/autotest_common.sh@936 -- # '[' -z 71396 ']' 00:07:13.694 06:32:24 -- common/autotest_common.sh@940 -- # kill -0 71396 00:07:13.694 06:32:24 -- common/autotest_common.sh@941 -- # uname 00:07:13.694 06:32:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.694 06:32:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71396 00:07:13.694 killing process with pid 71396 00:07:13.694 06:32:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:13.694 06:32:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:13.694 06:32:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71396' 00:07:13.694 06:32:24 -- common/autotest_common.sh@955 -- # kill 71396 00:07:13.694 06:32:24 -- common/autotest_common.sh@960 -- # wait 71396 00:07:13.954 ************************************ 00:07:13.954 END TEST accel_rpc 00:07:13.954 ************************************ 00:07:13.954 00:07:13.954 real 0m1.642s 00:07:13.954 user 0m1.599s 00:07:13.954 sys 0m0.417s 00:07:13.954 06:32:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.954 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:13.954 06:32:24 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:13.954 06:32:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.954 06:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.955 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:13.955 ************************************ 00:07:13.955 START TEST app_cmdline 00:07:13.955 ************************************ 00:07:13.955 06:32:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:13.955 * Looking for test storage... 
00:07:13.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:13.955 06:32:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.955 06:32:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.955 06:32:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:14.215 06:32:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:14.215 06:32:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:14.215 06:32:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:14.215 06:32:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:14.215 06:32:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:14.215 06:32:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:14.215 06:32:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.215 06:32:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:14.215 06:32:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:14.215 06:32:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:14.215 06:32:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:14.215 06:32:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:14.215 06:32:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:14.215 06:32:24 -- scripts/common.sh@344 -- # : 1 00:07:14.215 06:32:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:14.215 06:32:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.215 06:32:24 -- scripts/common.sh@364 -- # decimal 1 00:07:14.215 06:32:24 -- scripts/common.sh@352 -- # local d=1 00:07:14.215 06:32:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.215 06:32:24 -- scripts/common.sh@354 -- # echo 1 00:07:14.215 06:32:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:14.215 06:32:24 -- scripts/common.sh@365 -- # decimal 2 00:07:14.215 06:32:24 -- scripts/common.sh@352 -- # local d=2 00:07:14.215 06:32:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.215 06:32:24 -- scripts/common.sh@354 -- # echo 2 00:07:14.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.216 06:32:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:14.216 06:32:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:14.216 06:32:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:14.216 06:32:24 -- scripts/common.sh@367 -- # return 0 00:07:14.216 06:32:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.216 06:32:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.216 --rc genhtml_branch_coverage=1 00:07:14.216 --rc genhtml_function_coverage=1 00:07:14.216 --rc genhtml_legend=1 00:07:14.216 --rc geninfo_all_blocks=1 00:07:14.216 --rc geninfo_unexecuted_blocks=1 00:07:14.216 00:07:14.216 ' 00:07:14.216 06:32:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.216 --rc genhtml_branch_coverage=1 00:07:14.216 --rc genhtml_function_coverage=1 00:07:14.216 --rc genhtml_legend=1 00:07:14.216 --rc geninfo_all_blocks=1 00:07:14.216 --rc geninfo_unexecuted_blocks=1 00:07:14.216 00:07:14.216 ' 00:07:14.216 06:32:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.216 --rc genhtml_branch_coverage=1 00:07:14.216 --rc genhtml_function_coverage=1 00:07:14.216 --rc genhtml_legend=1 00:07:14.216 --rc geninfo_all_blocks=1 00:07:14.216 --rc geninfo_unexecuted_blocks=1 00:07:14.216 00:07:14.216 ' 00:07:14.216 06:32:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:14.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.216 --rc genhtml_branch_coverage=1 00:07:14.216 --rc genhtml_function_coverage=1 00:07:14.216 --rc genhtml_legend=1 00:07:14.216 --rc geninfo_all_blocks=1 00:07:14.216 --rc geninfo_unexecuted_blocks=1 00:07:14.216 00:07:14.216 ' 00:07:14.216 06:32:24 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:14.216 06:32:24 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71497 00:07:14.216 06:32:24 -- app/cmdline.sh@18 -- # waitforlisten 71497 00:07:14.216 06:32:24 -- common/autotest_common.sh@829 -- # '[' -z 71497 ']' 00:07:14.216 06:32:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.216 06:32:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.216 06:32:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.216 06:32:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.216 06:32:24 -- common/autotest_common.sh@10 -- # set +x 00:07:14.216 06:32:24 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:14.216 [2024-11-28 06:32:24.823583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
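cmdline.sh@16 above launches the target with an RPC allowlist, which is the behavior this suite exercises: the two listed methods must answer, and anything else must be rejected. In sketch form:

    # Only the allowlisted methods are callable on this target instance.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version          # allowed
    $rpc rpc_get_methods           # allowed
    $rpc env_dpdk_get_mem_stats    # expected to fail with -32601, as shown below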
00:07:14.216 [2024-11-28 06:32:24.823740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:07:14.216 [2024-11-28 06:32:24.962003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.477 [2024-11-28 06:32:25.024615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.477 [2024-11-28 06:32:25.024890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.045 06:32:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.045 06:32:25 -- common/autotest_common.sh@862 -- # return 0 00:07:15.045 06:32:25 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:15.303 { 00:07:15.303 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:15.303 "fields": { 00:07:15.303 "major": 24, 00:07:15.303 "minor": 1, 00:07:15.303 "patch": 1, 00:07:15.303 "suffix": "-pre", 00:07:15.303 "commit": "c13c99a5e" 00:07:15.303 } 00:07:15.303 } 00:07:15.303 06:32:25 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:15.303 06:32:25 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:15.303 06:32:25 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:15.303 06:32:25 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:15.303 06:32:25 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:15.303 06:32:25 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:15.303 06:32:25 -- app/cmdline.sh@26 -- # sort 00:07:15.303 06:32:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.303 06:32:25 -- common/autotest_common.sh@10 -- # set +x 00:07:15.303 06:32:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.303 06:32:25 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:15.303 06:32:25 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:15.303 06:32:25 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.303 06:32:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.303 06:32:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.303 06:32:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.303 06:32:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.303 06:32:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.303 06:32:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.303 06:32:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.303 06:32:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.303 06:32:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.303 06:32:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:15.303 06:32:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:15.303 request: 00:07:15.303 { 00:07:15.303 "method": "env_dpdk_get_mem_stats", 00:07:15.303 "req_id": 1 00:07:15.303 } 00:07:15.303 Got 
JSON-RPC error response 00:07:15.303 response: 00:07:15.303 { 00:07:15.303 "code": -32601, 00:07:15.303 "message": "Method not found" 00:07:15.303 } 00:07:15.303 06:32:26 -- common/autotest_common.sh@653 -- # es=1 00:07:15.303 06:32:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.303 06:32:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.303 06:32:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.303 06:32:26 -- app/cmdline.sh@1 -- # killprocess 71497 00:07:15.303 06:32:26 -- common/autotest_common.sh@936 -- # '[' -z 71497 ']' 00:07:15.303 06:32:26 -- common/autotest_common.sh@940 -- # kill -0 71497 00:07:15.303 06:32:26 -- common/autotest_common.sh@941 -- # uname 00:07:15.562 06:32:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.562 06:32:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71497 00:07:15.562 killing process with pid 71497 00:07:15.562 06:32:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.562 06:32:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.562 06:32:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71497' 00:07:15.562 06:32:26 -- common/autotest_common.sh@955 -- # kill 71497 00:07:15.562 06:32:26 -- common/autotest_common.sh@960 -- # wait 71497 00:07:15.823 ************************************ 00:07:15.823 END TEST app_cmdline 00:07:15.823 ************************************ 00:07:15.823 00:07:15.823 real 0m1.816s 00:07:15.823 user 0m1.967s 00:07:15.823 sys 0m0.569s 00:07:15.823 06:32:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.823 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:15.823 06:32:26 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:15.823 06:32:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.823 06:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.823 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:15.823 ************************************ 00:07:15.823 START TEST version 00:07:15.823 ************************************ 00:07:15.823 06:32:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:15.823 * Looking for test storage... 
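The version suite that starts here re-derives the release string from include/spdk/version.h one component at a time; each get_header_version call in the trace below boils down to a pipeline like this (MAJOR shown; cut -f2 relies on the header's tab separation):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'

For this build that yields 24, 1, 1 and -pre, which the script assembles into 24.1.1 and then into 24.1.1rc0 for comparison against the installed Python package.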
00:07:15.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.823 06:32:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:15.823 06:32:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:15.823 06:32:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:16.084 06:32:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.084 06:32:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.084 06:32:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.084 06:32:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.084 06:32:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.084 06:32:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.084 06:32:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.084 06:32:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.084 06:32:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.084 06:32:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.084 06:32:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.084 06:32:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.084 06:32:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.084 06:32:26 -- scripts/common.sh@344 -- # : 1 00:07:16.084 06:32:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.084 06:32:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.084 06:32:26 -- scripts/common.sh@364 -- # decimal 1 00:07:16.084 06:32:26 -- scripts/common.sh@352 -- # local d=1 00:07:16.084 06:32:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.084 06:32:26 -- scripts/common.sh@354 -- # echo 1 00:07:16.084 06:32:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.084 06:32:26 -- scripts/common.sh@365 -- # decimal 2 00:07:16.084 06:32:26 -- scripts/common.sh@352 -- # local d=2 00:07:16.084 06:32:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.084 06:32:26 -- scripts/common.sh@354 -- # echo 2 00:07:16.084 06:32:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.084 06:32:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.084 06:32:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.084 06:32:26 -- scripts/common.sh@367 -- # return 0 00:07:16.084 06:32:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.084 06:32:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.084 --rc genhtml_branch_coverage=1 00:07:16.084 --rc genhtml_function_coverage=1 00:07:16.084 --rc genhtml_legend=1 00:07:16.084 --rc geninfo_all_blocks=1 00:07:16.084 --rc geninfo_unexecuted_blocks=1 00:07:16.084 00:07:16.084 ' 00:07:16.084 06:32:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.084 --rc genhtml_branch_coverage=1 00:07:16.084 --rc genhtml_function_coverage=1 00:07:16.084 --rc genhtml_legend=1 00:07:16.084 --rc geninfo_all_blocks=1 00:07:16.084 --rc geninfo_unexecuted_blocks=1 00:07:16.084 00:07:16.084 ' 00:07:16.084 06:32:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.084 --rc genhtml_branch_coverage=1 00:07:16.084 --rc genhtml_function_coverage=1 00:07:16.084 --rc genhtml_legend=1 00:07:16.084 --rc geninfo_all_blocks=1 00:07:16.084 --rc geninfo_unexecuted_blocks=1 00:07:16.084 00:07:16.084 ' 00:07:16.084 06:32:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.084 --rc genhtml_branch_coverage=1 00:07:16.084 --rc genhtml_function_coverage=1 00:07:16.084 --rc genhtml_legend=1 00:07:16.084 --rc geninfo_all_blocks=1 00:07:16.084 --rc geninfo_unexecuted_blocks=1 00:07:16.084 00:07:16.084 ' 00:07:16.084 06:32:26 -- app/version.sh@17 -- # get_header_version major 00:07:16.084 06:32:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.084 06:32:26 -- app/version.sh@14 -- # cut -f2 00:07:16.084 06:32:26 -- app/version.sh@14 -- # tr -d '"' 00:07:16.084 06:32:26 -- app/version.sh@17 -- # major=24 00:07:16.084 06:32:26 -- app/version.sh@18 -- # get_header_version minor 00:07:16.084 06:32:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.084 06:32:26 -- app/version.sh@14 -- # tr -d '"' 00:07:16.085 06:32:26 -- app/version.sh@14 -- # cut -f2 00:07:16.085 06:32:26 -- app/version.sh@18 -- # minor=1 00:07:16.085 06:32:26 -- app/version.sh@19 -- # get_header_version patch 00:07:16.085 06:32:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.085 06:32:26 -- app/version.sh@14 -- # tr -d '"' 00:07:16.085 06:32:26 -- app/version.sh@14 -- # cut -f2 00:07:16.085 06:32:26 -- app/version.sh@19 -- # patch=1 00:07:16.085 06:32:26 -- app/version.sh@20 -- # get_header_version suffix 00:07:16.085 06:32:26 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.085 06:32:26 -- app/version.sh@14 -- # tr -d '"' 00:07:16.085 06:32:26 -- app/version.sh@14 -- # cut -f2 00:07:16.085 06:32:26 -- app/version.sh@20 -- # suffix=-pre 00:07:16.085 06:32:26 -- app/version.sh@22 -- # version=24.1 00:07:16.085 06:32:26 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.085 06:32:26 -- app/version.sh@25 -- # version=24.1.1 00:07:16.085 06:32:26 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:16.085 06:32:26 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:16.085 06:32:26 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:16.085 06:32:26 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:16.085 06:32:26 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:16.085 00:07:16.085 real 0m0.206s 00:07:16.085 user 0m0.119s 00:07:16.085 sys 0m0.111s 00:07:16.085 ************************************ 00:07:16.085 END TEST version 00:07:16.085 ************************************ 00:07:16.085 06:32:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.085 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 06:32:26 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:16.085 06:32:26 -- spdk/autotest.sh@191 -- # uname -s 00:07:16.085 06:32:26 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:16.085 06:32:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:16.085 06:32:26 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:16.085 06:32:26 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:16.085 06:32:26 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:16.085 06:32:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:16.085 06:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.085 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 ************************************ 00:07:16.085 START TEST blockdev_nvme 00:07:16.085 ************************************ 00:07:16.085 06:32:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:16.085 * Looking for test storage... 00:07:16.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:16.085 06:32:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:16.085 06:32:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:16.085 06:32:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:16.085 06:32:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:16.085 06:32:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:16.085 06:32:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:16.085 06:32:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:16.085 06:32:26 -- scripts/common.sh@335 -- # IFS=.-: 00:07:16.085 06:32:26 -- scripts/common.sh@335 -- # read -ra ver1 00:07:16.085 06:32:26 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.085 06:32:26 -- scripts/common.sh@336 -- # read -ra ver2 00:07:16.085 06:32:26 -- scripts/common.sh@337 -- # local 'op=<' 00:07:16.085 06:32:26 -- scripts/common.sh@339 -- # ver1_l=2 00:07:16.085 06:32:26 -- scripts/common.sh@340 -- # ver2_l=1 00:07:16.085 06:32:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:16.085 06:32:26 -- scripts/common.sh@343 -- # case "$op" in 00:07:16.085 06:32:26 -- scripts/common.sh@344 -- # : 1 00:07:16.085 06:32:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:16.085 06:32:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.085 06:32:26 -- scripts/common.sh@364 -- # decimal 1 00:07:16.085 06:32:26 -- scripts/common.sh@352 -- # local d=1 00:07:16.085 06:32:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.085 06:32:26 -- scripts/common.sh@354 -- # echo 1 00:07:16.085 06:32:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:16.085 06:32:26 -- scripts/common.sh@365 -- # decimal 2 00:07:16.085 06:32:26 -- scripts/common.sh@352 -- # local d=2 00:07:16.085 06:32:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.085 06:32:26 -- scripts/common.sh@354 -- # echo 2 00:07:16.346 06:32:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:16.346 06:32:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:16.346 06:32:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:16.346 06:32:26 -- scripts/common.sh@367 -- # return 0 00:07:16.346 06:32:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.346 06:32:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.346 --rc genhtml_branch_coverage=1 00:07:16.346 --rc genhtml_function_coverage=1 00:07:16.346 --rc genhtml_legend=1 00:07:16.346 --rc geninfo_all_blocks=1 00:07:16.346 --rc geninfo_unexecuted_blocks=1 00:07:16.346 00:07:16.346 ' 00:07:16.346 06:32:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.346 --rc genhtml_branch_coverage=1 00:07:16.346 --rc genhtml_function_coverage=1 00:07:16.346 --rc genhtml_legend=1 00:07:16.346 --rc geninfo_all_blocks=1 00:07:16.346 --rc geninfo_unexecuted_blocks=1 00:07:16.346 00:07:16.346 ' 00:07:16.346 06:32:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.346 --rc genhtml_branch_coverage=1 00:07:16.346 --rc genhtml_function_coverage=1 00:07:16.346 --rc genhtml_legend=1 00:07:16.346 --rc geninfo_all_blocks=1 00:07:16.346 --rc geninfo_unexecuted_blocks=1 00:07:16.346 00:07:16.346 ' 00:07:16.346 06:32:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.346 --rc genhtml_branch_coverage=1 00:07:16.346 --rc genhtml_function_coverage=1 00:07:16.346 --rc genhtml_legend=1 00:07:16.346 --rc geninfo_all_blocks=1 00:07:16.346 --rc geninfo_unexecuted_blocks=1 00:07:16.346 00:07:16.346 ' 00:07:16.346 06:32:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:16.346 06:32:26 -- bdev/nbd_common.sh@6 -- # set -e 00:07:16.346 06:32:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:16.346 06:32:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:16.346 06:32:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:16.346 06:32:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:16.346 06:32:26 -- bdev/blockdev.sh@18 -- # : 00:07:16.346 06:32:26 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:16.346 06:32:26 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:16.346 06:32:26 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:16.346 06:32:26 -- bdev/blockdev.sh@672 -- # uname -s 00:07:16.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.346 06:32:26 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:16.346 06:32:26 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:16.346 06:32:26 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:16.346 06:32:26 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:16.346 06:32:26 -- bdev/blockdev.sh@682 -- # dek= 00:07:16.346 06:32:26 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:16.346 06:32:26 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:16.346 06:32:26 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:16.346 06:32:26 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:16.346 06:32:26 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:16.346 06:32:26 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:16.346 06:32:26 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=71656 00:07:16.346 06:32:26 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:16.346 06:32:26 -- bdev/blockdev.sh@47 -- # waitforlisten 71656 00:07:16.346 06:32:26 -- common/autotest_common.sh@829 -- # '[' -z 71656 ']' 00:07:16.346 06:32:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.346 06:32:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.346 06:32:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.346 06:32:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.346 06:32:26 -- common/autotest_common.sh@10 -- # set +x 00:07:16.346 06:32:26 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:16.346 [2024-11-28 06:32:26.926613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.346 [2024-11-28 06:32:26.926737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71656 ] 00:07:16.346 [2024-11-28 06:32:27.062905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.346 [2024-11-28 06:32:27.103393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.346 [2024-11-28 06:32:27.103603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.288 06:32:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.288 06:32:27 -- common/autotest_common.sh@862 -- # return 0 00:07:17.288 06:32:27 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:17.288 06:32:27 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:17.288 06:32:27 -- bdev/blockdev.sh@79 -- # local json 00:07:17.288 06:32:27 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:17.288 06:32:27 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:17.289 06:32:27 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:17.289 06:32:27 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.289 06:32:27 -- common/autotest_common.sh@10 -- # set +x 00:07:17.289 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.289 06:32:28 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:17.289 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.289 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.289 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.289 06:32:28 -- bdev/blockdev.sh@738 -- # cat 00:07:17.289 06:32:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:17.289 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.289 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.550 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.550 06:32:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:17.550 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.550 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.550 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.550 06:32:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:17.550 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.550 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.550 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.550 06:32:28 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:17.550 06:32:28 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:17.550 06:32:28 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:17.550 06:32:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.550 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.550 06:32:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.550 06:32:28 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:17.550 06:32:28 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:17.551 06:32:28 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10b4dc92-1258-49e3-acbf-b12c46a69664"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10b4dc92-1258-49e3-acbf-b12c46a69664",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2a98d194-4d88-4f54-9ed2-49ea1859f1de"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2a98d194-4d88-4f54-9ed2-49ea1859f1de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e5acd202-5f37-4272-b4ca-f18b73518eb1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5acd202-5f37-4272-b4ca-f18b73518eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c3383e70-f32f-4806-b619-5b1d51f1413e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c3383e70-f32f-4806-b619-5b1d51f1413e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4faf07e7-e7e5-4858-9661-436957b3d11f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4faf07e7-e7e5-4858-9661-436957b3d11f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "235c1258-b25d-4fc9-8cfe-13d7d48f917c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "235c1258-b25d-4fc9-8cfe-13d7d48f917c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:17.551 06:32:28 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:17.551 06:32:28 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:17.551 06:32:28 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:17.551 06:32:28 -- bdev/blockdev.sh@752 -- # killprocess 71656 00:07:17.551 06:32:28 -- common/autotest_common.sh@936 -- # '[' -z 71656 ']' 00:07:17.551 06:32:28 -- common/autotest_common.sh@940 -- # kill -0 71656 00:07:17.551 06:32:28 -- common/autotest_common.sh@941 -- # uname 00:07:17.551 06:32:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:17.551 06:32:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71656 
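[editor's note] The block above is the raw bdev listing that blockdev.sh captures into its bdevs_name array. Outside the test harness, the same data can be pulled from any running SPDK app over RPC; a minimal sketch, assuming the default RPC socket and that jq is installed (bdev names and fields as in the log above):

    $ scripts/rpc.py bdev_get_bdevs                # full JSON array, as dumped above
    $ scripts/rpc.py bdev_get_bdevs -b Nvme2n1     # restrict to one bdev by name
    $ scripts/rpc.py bdev_get_bdevs | \
        jq -r '.[] | select(.product_name == "NVMe disk") | "\(.name): \(.num_blocks) x \(.block_size)B"'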
00:07:17.551 killing process with pid 71656 00:07:17.551 06:32:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:17.551 06:32:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:17.551 06:32:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71656' 00:07:17.551 06:32:28 -- common/autotest_common.sh@955 -- # kill 71656 00:07:17.551 06:32:28 -- common/autotest_common.sh@960 -- # wait 71656 00:07:17.812 06:32:28 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:17.812 06:32:28 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:17.812 06:32:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:17.812 06:32:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.812 06:32:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.812 ************************************ 00:07:17.812 START TEST bdev_hello_world 00:07:17.812 ************************************ 00:07:17.812 06:32:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:18.072 [2024-11-28 06:32:28.594445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.072 [2024-11-28 06:32:28.594569] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71723 ] 00:07:18.072 [2024-11-28 06:32:28.722936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.072 [2024-11-28 06:32:28.763590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.646 [2024-11-28 06:32:29.124138] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:18.646 [2024-11-28 06:32:29.124200] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:18.646 [2024-11-28 06:32:29.124240] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:18.646 [2024-11-28 06:32:29.126334] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:18.646 [2024-11-28 06:32:29.126800] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:18.646 [2024-11-28 06:32:29.126838] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:18.646 [2024-11-28 06:32:29.127032] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
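[editor's note] The hello_world stage is the stock hello_bdev example run against the first NVMe bdev: it opens the bdev, writes a buffer, reads it back, and prints the string. The exact invocation from this run, reproducible by hand from the repo root:

    $ build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # expected tail of the output:
    #   hello_bdev.c: *NOTICE*: Read string from bdev : Hello World!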
00:07:18.646 00:07:18.646 [2024-11-28 06:32:29.127113] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:18.646 00:07:18.646 real 0m0.785s 00:07:18.646 user 0m0.511s 00:07:18.646 sys 0m0.171s 00:07:18.646 06:32:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.646 ************************************ 00:07:18.646 END TEST bdev_hello_world 00:07:18.646 ************************************ 00:07:18.646 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:18.646 06:32:29 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:18.646 06:32:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:18.646 06:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.646 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:18.646 ************************************ 00:07:18.646 START TEST bdev_bounds 00:07:18.646 ************************************ 00:07:18.646 06:32:29 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:18.646 06:32:29 -- bdev/blockdev.sh@288 -- # bdevio_pid=71749 00:07:18.646 Process bdevio pid: 71749 00:07:18.646 06:32:29 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:18.646 06:32:29 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.646 06:32:29 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 71749' 00:07:18.646 06:32:29 -- bdev/blockdev.sh@291 -- # waitforlisten 71749 00:07:18.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.646 06:32:29 -- common/autotest_common.sh@829 -- # '[' -z 71749 ']' 00:07:18.646 06:32:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.646 06:32:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.646 06:32:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.646 06:32:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.646 06:32:29 -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 [2024-11-28 06:32:29.421409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:18.909 [2024-11-28 06:32:29.421634] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71749 ] 00:07:18.909 [2024-11-28 06:32:29.555663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.909 [2024-11-28 06:32:29.587986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.909 [2024-11-28 06:32:29.588334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.909 [2024-11-28 06:32:29.588385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.482 06:32:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.482 06:32:30 -- common/autotest_common.sh@862 -- # return 0 00:07:19.482 06:32:30 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:19.745 I/O targets: 00:07:19.745 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:19.745 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:19.745 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.745 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.745 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.745 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:19.745 00:07:19.745 00:07:19.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.745 http://cunit.sourceforge.net/ 00:07:19.745 00:07:19.745 00:07:19.745 Suite: bdevio tests on: Nvme3n1 00:07:19.745 Test: blockdev write read block ...passed 00:07:19.745 Test: blockdev write zeroes read block ...passed 00:07:19.745 Test: blockdev write zeroes read no split ...passed 00:07:19.745 Test: blockdev write zeroes read split ...passed 00:07:19.745 Test: blockdev write zeroes read split partial ...passed 00:07:19.745 Test: blockdev reset ...[2024-11-28 06:32:30.340896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:19.745 passed 00:07:19.745 Test: blockdev write read 8 blocks ...[2024-11-28 06:32:30.342845] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.745 passed 00:07:19.745 Test: blockdev write read size > 128k ...passed 00:07:19.745 Test: blockdev write read invalid size ...passed 00:07:19.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.745 Test: blockdev write read max offset ...passed 00:07:19.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.745 Test: blockdev writev readv 8 blocks ...passed 00:07:19.745 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.745 Test: blockdev writev readv block ...passed 00:07:19.745 Test: blockdev writev readv size > 128k ...passed 00:07:19.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.745 Test: blockdev comparev and writev ...[2024-11-28 06:32:30.347488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aee0e000 len:0x1000 00:07:19.745 [2024-11-28 06:32:30.347541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.745 passed 00:07:19.745 Test: blockdev nvme passthru rw ...passed 00:07:19.745 Test: blockdev nvme passthru vendor specific ...[2024-11-28 06:32:30.348021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.745 passed 00:07:19.745 Test: blockdev nvme admin passthru ...[2024-11-28 06:32:30.348046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.745 passed 00:07:19.745 Test: blockdev copy ...passed 00:07:19.745 Suite: bdevio tests on: Nvme2n3 00:07:19.745 Test: blockdev write read block ...passed 00:07:19.745 Test: blockdev write zeroes read block ...passed 00:07:19.745 Test: blockdev write zeroes read no split ...passed 00:07:19.745 Test: blockdev write zeroes read split ...passed 00:07:19.745 Test: blockdev write zeroes read split partial ...passed 00:07:19.745 Test: blockdev reset ...[2024-11-28 06:32:30.362125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:19.745 [2024-11-28 06:32:30.363987] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.745 passed 00:07:19.745 Test: blockdev write read 8 blocks ...passed 00:07:19.745 Test: blockdev write read size > 128k ...passed 00:07:19.745 Test: blockdev write read invalid size ...passed 00:07:19.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.745 Test: blockdev write read max offset ...passed 00:07:19.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.745 Test: blockdev writev readv 8 blocks ...passed 00:07:19.745 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.745 Test: blockdev writev readv block ...passed 00:07:19.745 Test: blockdev writev readv size > 128k ...passed 00:07:19.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.745 Test: blockdev comparev and writev ...[2024-11-28 06:32:30.368050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:19.745 Test: blockdev nvme passthru rw ...passed 00:07:19.745 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.745 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2aee0a000 len:0x1000 00:07:19.745 [2024-11-28 06:32:30.368175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.745 [2024-11-28 06:32:30.368584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.745 [2024-11-28 06:32:30.368609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.745 passed 00:07:19.745 Test: blockdev copy ...passed 00:07:19.745 Suite: bdevio tests on: Nvme2n2 00:07:19.745 Test: blockdev write read block ...passed 00:07:19.745 Test: blockdev write zeroes read block ...passed 00:07:19.745 Test: blockdev write zeroes read no split ...passed 00:07:19.745 Test: blockdev write zeroes read split ...passed 00:07:19.745 Test: blockdev write zeroes read split partial ...passed 00:07:19.745 Test: blockdev reset ...[2024-11-28 06:32:30.385022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:19.745 [2024-11-28 06:32:30.386753] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.745 passed 00:07:19.745 Test: blockdev write read 8 blocks ...passed 00:07:19.745 Test: blockdev write read size > 128k ...passed 00:07:19.746 Test: blockdev write read invalid size ...passed 00:07:19.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.746 Test: blockdev write read max offset ...passed 00:07:19.746 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.746 Test: blockdev writev readv 8 blocks ...passed 00:07:19.746 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.746 Test: blockdev writev readv block ...passed 00:07:19.746 Test: blockdev writev readv size > 128k ...passed 00:07:19.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.746 Test: blockdev comparev and writev ...[2024-11-28 06:32:30.391723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aee06000 len:0x1000 00:07:19.746 [2024-11-28 06:32:30.391758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev nvme passthru rw ...passed 00:07:19.746 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.746 Test: blockdev nvme admin passthru ...[2024-11-28 06:32:30.392350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.746 [2024-11-28 06:32:30.392382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev copy ...passed 00:07:19.746 Suite: bdevio tests on: Nvme2n1 00:07:19.746 Test: blockdev write read block ...passed 00:07:19.746 Test: blockdev write zeroes read block ...passed 00:07:19.746 Test: blockdev write zeroes read no split ...passed 00:07:19.746 Test: blockdev write zeroes read split ...passed 00:07:19.746 Test: blockdev write zeroes read split partial ...passed 00:07:19.746 Test: blockdev reset ...[2024-11-28 06:32:30.406176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:19.746 passed 00:07:19.746 Test: blockdev write read 8 blocks ...[2024-11-28 06:32:30.407753] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.746 passed 00:07:19.746 Test: blockdev write read size > 128k ...passed 00:07:19.746 Test: blockdev write read invalid size ...passed 00:07:19.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.746 Test: blockdev write read max offset ...passed 00:07:19.746 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.746 Test: blockdev writev readv 8 blocks ...passed 00:07:19.746 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.746 Test: blockdev writev readv block ...passed 00:07:19.746 Test: blockdev writev readv size > 128k ...passed 00:07:19.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.746 Test: blockdev comparev and writev ...[2024-11-28 06:32:30.411733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aee02000 len:0x1000 00:07:19.746 [2024-11-28 06:32:30.411769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev nvme passthru rw ...passed 00:07:19.746 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.746 Test: blockdev nvme admin passthru ...[2024-11-28 06:32:30.412183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.746 [2024-11-28 06:32:30.412209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev copy ...passed 00:07:19.746 Suite: bdevio tests on: Nvme1n1 00:07:19.746 Test: blockdev write read block ...passed 00:07:19.746 Test: blockdev write zeroes read block ...passed 00:07:19.746 Test: blockdev write zeroes read no split ...passed 00:07:19.746 Test: blockdev write zeroes read split ...passed 00:07:19.746 Test: blockdev write zeroes read split partial ...passed 00:07:19.746 Test: blockdev reset ...[2024-11-28 06:32:30.428005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:19.746 [2024-11-28 06:32:30.430189] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.746 passed 00:07:19.746 Test: blockdev write read 8 blocks ...passed 00:07:19.746 Test: blockdev write read size > 128k ...passed 00:07:19.746 Test: blockdev write read invalid size ...passed 00:07:19.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.746 Test: blockdev write read max offset ...passed 00:07:19.746 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.746 Test: blockdev writev readv 8 blocks ...passed 00:07:19.746 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.746 Test: blockdev writev readv block ...passed 00:07:19.746 Test: blockdev writev readv size > 128k ...passed 00:07:19.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.746 Test: blockdev comparev and writev ...[2024-11-28 06:32:30.436408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bcc0e000 len:0x1000 00:07:19.746 [2024-11-28 06:32:30.436453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev nvme passthru rw ...passed 00:07:19.746 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.746 Test: blockdev nvme admin passthru ...[2024-11-28 06:32:30.437029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.746 [2024-11-28 06:32:30.437067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev copy ...passed 00:07:19.746 Suite: bdevio tests on: Nvme0n1 00:07:19.746 Test: blockdev write read block ...passed 00:07:19.746 Test: blockdev write zeroes read block ...passed 00:07:19.746 Test: blockdev write zeroes read no split ...passed 00:07:19.746 Test: blockdev write zeroes read split ...passed 00:07:19.746 Test: blockdev write zeroes read split partial ...passed 00:07:19.746 Test: blockdev reset ...[2024-11-28 06:32:30.449550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:19.746 passed 00:07:19.746 Test: blockdev write read 8 blocks ...[2024-11-28 06:32:30.451083] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:19.746 passed 00:07:19.746 Test: blockdev write read size > 128k ...passed 00:07:19.746 Test: blockdev write read invalid size ...passed 00:07:19.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.746 Test: blockdev write read max offset ...passed 00:07:19.746 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.746 Test: blockdev writev readv 8 blocks ...passed 00:07:19.746 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.746 Test: blockdev writev readv block ...passed 00:07:19.746 Test: blockdev writev readv size > 128k ...passed 00:07:19.746 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.746 Test: blockdev comparev and writev ...passed 00:07:19.746 Test: blockdev nvme passthru rw ...[2024-11-28 06:32:30.454689] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:19.746 separate metadata which is not supported yet. 00:07:19.746 passed 00:07:19.746 Test: blockdev nvme passthru vendor specific ...[2024-11-28 06:32:30.455049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:07:19.746 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:19.746 [2024-11-28 06:32:30.455166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:19.746 passed 00:07:19.746 Test: blockdev copy ...passed 00:07:19.746 00:07:19.746 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.746 suites 6 6 n/a 0 0 00:07:19.746 tests 138 138 138 0 0 00:07:19.746 asserts 893 893 893 0 n/a 00:07:19.746 00:07:19.746 Elapsed time = 0.301 seconds 00:07:19.746 0 00:07:19.746 06:32:30 -- bdev/blockdev.sh@293 -- # killprocess 71749 00:07:19.746 06:32:30 -- common/autotest_common.sh@936 -- # '[' -z 71749 ']' 00:07:19.746 06:32:30 -- common/autotest_common.sh@940 -- # kill -0 71749 00:07:19.746 06:32:30 -- common/autotest_common.sh@941 -- # uname 00:07:19.746 06:32:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.746 06:32:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71749 00:07:19.746 06:32:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:19.746 06:32:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:19.746 06:32:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71749' 00:07:19.746 killing process with pid 71749 00:07:19.746 06:32:30 -- common/autotest_common.sh@955 -- # kill 71749 00:07:19.746 06:32:30 -- common/autotest_common.sh@960 -- # wait 71749 00:07:20.009 06:32:30 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:20.009 00:07:20.009 real 0m1.264s 00:07:20.009 user 0m3.242s 00:07:20.009 sys 0m0.229s 00:07:20.009 06:32:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.009 06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:20.009 ************************************ 00:07:20.009 END TEST bdev_bounds 00:07:20.009 ************************************ 00:07:20.009 06:32:30 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:20.009 06:32:30 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:20.009 06:32:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.009 
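[editor's note] The bdev_bounds stage above pairs two pieces: bdevio is started with -w so it loads the bdev config and then waits for an RPC trigger instead of running immediately, and tests.py fires that trigger; the Run Summary (138/138 tests across 6 suites) is printed by bdevio itself. The same two commands, taken from this run:

    $ test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # load bdevs, wait for the trigger
    $ test/bdev/bdevio/tests.py perform_tests                        # run all suites; bdevio exits afterwards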
06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:20.009 ************************************ 00:07:20.009 START TEST bdev_nbd 00:07:20.009 ************************************ 00:07:20.009 06:32:30 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:20.009 06:32:30 -- bdev/blockdev.sh@298 -- # uname -s 00:07:20.009 06:32:30 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:20.009 06:32:30 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.009 06:32:30 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:20.009 06:32:30 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.009 06:32:30 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:20.009 06:32:30 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:20.009 06:32:30 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:20.009 06:32:30 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:20.009 06:32:30 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:20.009 06:32:30 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:20.009 06:32:30 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.009 06:32:30 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:20.009 06:32:30 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.009 06:32:30 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:20.009 06:32:30 -- bdev/blockdev.sh@316 -- # nbd_pid=71797 00:07:20.009 06:32:30 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:20.009 06:32:30 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:20.009 06:32:30 -- bdev/blockdev.sh@318 -- # waitforlisten 71797 /var/tmp/spdk-nbd.sock 00:07:20.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:20.009 06:32:30 -- common/autotest_common.sh@829 -- # '[' -z 71797 ']' 00:07:20.009 06:32:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:20.009 06:32:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.009 06:32:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:20.009 06:32:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.009 06:32:30 -- common/autotest_common.sh@10 -- # set +x 00:07:20.009 [2024-11-28 06:32:30.736377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
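[editor's note] For the nbd stage the harness starts bdev_svc as a dedicated RPC server on /var/tmp/spdk-nbd.sock, and every nbd_* call that follows addresses that socket via -s. A sketch of the same setup; using nbd_get_disks as the readiness probe is an illustrative simplification of the harness's waitforlisten helper:

    $ test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks   # prints [] once the app is serving RPC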
00:07:20.009 [2024-11-28 06:32:30.736486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.270 [2024-11-28 06:32:30.873200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.270 [2024-11-28 06:32:30.904425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.844 06:32:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.844 06:32:31 -- common/autotest_common.sh@862 -- # return 0 00:07:20.844 06:32:31 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@24 -- # local i 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.844 06:32:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:21.107 06:32:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:21.107 06:32:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:21.107 06:32:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:21.107 06:32:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:21.107 06:32:31 -- common/autotest_common.sh@867 -- # local i 00:07:21.107 06:32:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.107 06:32:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.107 06:32:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:21.107 06:32:31 -- common/autotest_common.sh@871 -- # break 00:07:21.107 06:32:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.107 06:32:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.107 06:32:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.107 1+0 records in 00:07:21.107 1+0 records out 00:07:21.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653226 s, 6.3 MB/s 00:07:21.107 06:32:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.107 06:32:31 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.107 06:32:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.107 06:32:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.107 06:32:31 -- common/autotest_common.sh@887 -- # return 0 00:07:21.107 06:32:31 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.107 06:32:31 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:21.107 06:32:31 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:21.368 06:32:31 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:21.368 06:32:31 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:21.368 06:32:31 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:21.368 06:32:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:21.368 06:32:31 -- common/autotest_common.sh@867 -- # local i 00:07:21.368 06:32:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.368 06:32:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.368 06:32:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:21.368 06:32:32 -- common/autotest_common.sh@871 -- # break 00:07:21.368 06:32:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.368 06:32:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.368 06:32:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.368 1+0 records in 00:07:21.368 1+0 records out 00:07:21.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359756 s, 11.4 MB/s 00:07:21.368 06:32:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.368 06:32:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.368 06:32:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.368 06:32:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.368 06:32:32 -- common/autotest_common.sh@887 -- # return 0 00:07:21.368 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.368 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:21.368 06:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:21.630 06:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:21.630 06:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:21.630 06:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:21.630 06:32:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:21.630 06:32:32 -- common/autotest_common.sh@867 -- # local i 00:07:21.630 06:32:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.630 06:32:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.630 06:32:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:21.630 06:32:32 -- common/autotest_common.sh@871 -- # break 00:07:21.630 06:32:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.630 06:32:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.630 06:32:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.630 1+0 records in 00:07:21.630 1+0 records out 00:07:21.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652508 s, 6.3 MB/s 00:07:21.630 06:32:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.630 06:32:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.630 06:32:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.630 06:32:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.630 06:32:32 -- common/autotest_common.sh@887 -- # return 0 
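[editor's note] Each export above repeats one pattern: nbd_start_disk maps a bdev onto a /dev/nbdX node, waitfornbd polls /proc/partitions until the kernel registers it, and a single O_DIRECT read through dd (the "1+0 records" lines) proves the device actually services I/O. One iteration, condensed (run as root with the nbd module loaded; the polling loop is a simplification of waitfornbd):

    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd3
    $ until grep -q -w nbd3 /proc/partitions; do sleep 0.1; done
    $ dd if=/dev/nbd3 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # expect: 1+0 records in / 1+0 records out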
00:07:21.630 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.630 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:21.630 06:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:21.895 06:32:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:21.895 06:32:32 -- common/autotest_common.sh@867 -- # local i 00:07:21.895 06:32:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:21.895 06:32:32 -- common/autotest_common.sh@871 -- # break 00:07:21.895 06:32:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.895 1+0 records in 00:07:21.895 1+0 records out 00:07:21.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036127 s, 11.3 MB/s 00:07:21.895 06:32:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.895 06:32:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:21.895 06:32:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.895 06:32:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:21.895 06:32:32 -- common/autotest_common.sh@887 -- # return 0 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:21.895 06:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:21.895 06:32:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:21.895 06:32:32 -- common/autotest_common.sh@867 -- # local i 00:07:21.895 06:32:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:21.895 06:32:32 -- common/autotest_common.sh@871 -- # break 00:07:21.895 06:32:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:21.895 06:32:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.230 1+0 records in 00:07:22.230 1+0 records out 00:07:22.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012811 s, 3.2 MB/s 00:07:22.230 06:32:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.230 06:32:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:22.230 06:32:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.230 06:32:32 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:22.230 06:32:32 -- common/autotest_common.sh@887 -- # return 0 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:22.230 06:32:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:22.230 06:32:32 -- common/autotest_common.sh@867 -- # local i 00:07:22.230 06:32:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:22.230 06:32:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:22.230 06:32:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:22.230 06:32:32 -- common/autotest_common.sh@871 -- # break 00:07:22.230 06:32:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:22.230 06:32:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:22.230 06:32:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.230 1+0 records in 00:07:22.230 1+0 records out 00:07:22.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116357 s, 3.5 MB/s 00:07:22.230 06:32:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.230 06:32:32 -- common/autotest_common.sh@884 -- # size=4096 00:07:22.230 06:32:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.230 06:32:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:22.230 06:32:32 -- common/autotest_common.sh@887 -- # return 0 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:22.230 06:32:32 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd0", 00:07:22.492 "bdev_name": "Nvme0n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd1", 00:07:22.492 "bdev_name": "Nvme1n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd2", 00:07:22.492 "bdev_name": "Nvme2n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd3", 00:07:22.492 "bdev_name": "Nvme2n2" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd4", 00:07:22.492 "bdev_name": "Nvme2n3" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd5", 00:07:22.492 "bdev_name": "Nvme3n1" 00:07:22.492 } 00:07:22.492 ]' 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd0", 00:07:22.492 "bdev_name": "Nvme0n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd1", 00:07:22.492 "bdev_name": "Nvme1n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd2", 00:07:22.492 "bdev_name": "Nvme2n1" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd3", 00:07:22.492 "bdev_name": "Nvme2n2" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": 
"/dev/nbd4", 00:07:22.492 "bdev_name": "Nvme2n3" 00:07:22.492 }, 00:07:22.492 { 00:07:22.492 "nbd_device": "/dev/nbd5", 00:07:22.492 "bdev_name": "Nvme3n1" 00:07:22.492 } 00:07:22.492 ]' 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@51 -- # local i 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.492 06:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@41 -- # break 00:07:22.753 06:32:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.754 06:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.754 06:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@41 -- # break 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@41 -- # break 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.014 06:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:23.276 
06:32:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@41 -- # break 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.276 06:32:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@41 -- # break 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.537 06:32:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@41 -- # break 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.800 06:32:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@65 -- # true 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@122 -- # count=0 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@127 -- # return 0 00:07:24.062 06:32:34 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@12 -- # local i 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.062 06:32:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:24.325 /dev/nbd0 00:07:24.325 06:32:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.325 06:32:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.325 06:32:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:24.325 06:32:34 -- common/autotest_common.sh@867 -- # local i 00:07:24.325 06:32:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:24.325 06:32:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.325 06:32:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:24.325 06:32:34 -- common/autotest_common.sh@871 -- # break 00:07:24.325 06:32:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.325 06:32:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.325 06:32:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.325 1+0 records in 00:07:24.325 1+0 records out 00:07:24.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121981 s, 3.4 MB/s 00:07:24.325 06:32:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.325 06:32:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:24.325 06:32:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.325 06:32:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:24.325 06:32:34 -- common/autotest_common.sh@887 -- # return 0 00:07:24.325 06:32:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.325 06:32:34 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.325 06:32:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:24.585 /dev/nbd1 00:07:24.585 06:32:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.585 06:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.585 06:32:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:24.585 06:32:35 -- common/autotest_common.sh@867 -- # local i 00:07:24.585 06:32:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:24.585 06:32:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.585 06:32:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:24.585 06:32:35 -- common/autotest_common.sh@871 -- # break 
00:07:24.585 06:32:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.585 06:32:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.585 06:32:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.585 1+0 records in 00:07:24.585 1+0 records out 00:07:24.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140001 s, 2.9 MB/s 00:07:24.585 06:32:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.585 06:32:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:24.585 06:32:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.585 06:32:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:24.585 06:32:35 -- common/autotest_common.sh@887 -- # return 0 00:07:24.585 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.585 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.585 06:32:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:24.585 /dev/nbd10 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:24.847 06:32:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:24.847 06:32:35 -- common/autotest_common.sh@867 -- # local i 00:07:24.847 06:32:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:24.847 06:32:35 -- common/autotest_common.sh@871 -- # break 00:07:24.847 06:32:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.847 1+0 records in 00:07:24.847 1+0 records out 00:07:24.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845316 s, 4.8 MB/s 00:07:24.847 06:32:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.847 06:32:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:24.847 06:32:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.847 06:32:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:24.847 06:32:35 -- common/autotest_common.sh@887 -- # return 0 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:24.847 /dev/nbd11 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:24.847 06:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:24.847 06:32:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:24.847 06:32:35 -- common/autotest_common.sh@867 -- # local i 00:07:24.847 06:32:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:24.847 06:32:35 -- 
common/autotest_common.sh@871 -- # break 00:07:24.847 06:32:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:24.847 06:32:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.847 1+0 records in 00:07:24.847 1+0 records out 00:07:24.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122431 s, 3.3 MB/s 00:07:24.847 06:32:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.847 06:32:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:24.847 06:32:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.109 06:32:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:25.109 06:32:35 -- common/autotest_common.sh@887 -- # return 0 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:25.109 /dev/nbd12 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:25.109 06:32:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:25.109 06:32:35 -- common/autotest_common.sh@867 -- # local i 00:07:25.109 06:32:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:25.109 06:32:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:25.109 06:32:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:25.109 06:32:35 -- common/autotest_common.sh@871 -- # break 00:07:25.109 06:32:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:25.109 06:32:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:25.109 06:32:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.109 1+0 records in 00:07:25.109 1+0 records out 00:07:25.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135616 s, 3.0 MB/s 00:07:25.109 06:32:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.109 06:32:35 -- common/autotest_common.sh@884 -- # size=4096 00:07:25.109 06:32:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.109 06:32:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:25.109 06:32:35 -- common/autotest_common.sh@887 -- # return 0 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.109 06:32:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:25.370 /dev/nbd13 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:25.370 06:32:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:25.370 06:32:36 -- common/autotest_common.sh@867 -- # local i 00:07:25.370 06:32:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:25.370 06:32:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:25.370 06:32:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:07:25.370 06:32:36 -- common/autotest_common.sh@871 -- # break 00:07:25.370 06:32:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:25.370 06:32:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:25.370 06:32:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.370 1+0 records in 00:07:25.370 1+0 records out 00:07:25.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104981 s, 3.9 MB/s 00:07:25.370 06:32:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.370 06:32:36 -- common/autotest_common.sh@884 -- # size=4096 00:07:25.370 06:32:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.370 06:32:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:25.370 06:32:36 -- common/autotest_common.sh@887 -- # return 0 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.370 06:32:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.632 06:32:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd0", 00:07:25.632 "bdev_name": "Nvme0n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd1", 00:07:25.632 "bdev_name": "Nvme1n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd10", 00:07:25.632 "bdev_name": "Nvme2n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd11", 00:07:25.632 "bdev_name": "Nvme2n2" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd12", 00:07:25.632 "bdev_name": "Nvme2n3" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd13", 00:07:25.632 "bdev_name": "Nvme3n1" 00:07:25.632 } 00:07:25.632 ]' 00:07:25.632 06:32:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd0", 00:07:25.632 "bdev_name": "Nvme0n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd1", 00:07:25.632 "bdev_name": "Nvme1n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd10", 00:07:25.632 "bdev_name": "Nvme2n1" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd11", 00:07:25.632 "bdev_name": "Nvme2n2" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd12", 00:07:25.632 "bdev_name": "Nvme2n3" 00:07:25.632 }, 00:07:25.632 { 00:07:25.632 "nbd_device": "/dev/nbd13", 00:07:25.633 "bdev_name": "Nvme3n1" 00:07:25.633 } 00:07:25.633 ]' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.633 /dev/nbd1 00:07:25.633 /dev/nbd10 00:07:25.633 /dev/nbd11 00:07:25.633 /dev/nbd12 00:07:25.633 /dev/nbd13' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.633 /dev/nbd1 00:07:25.633 /dev/nbd10 00:07:25.633 /dev/nbd11 00:07:25.633 /dev/nbd12 00:07:25.633 /dev/nbd13' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@65 -- # count=6 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@95 -- # count=6 
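The loop traced above exports each NVMe bdev as a kernel NBD node through the nbd_start_disk RPC, waits for the node to show up in /proc/partitions (up to 20 polls) and proves it readable with a single direct 4 KiB read, then counts the exports from the target's own JSON report. A minimal stand-alone sketch of that flow, assuming rpc.py is on PATH, an SPDK target is already listening on /var/tmp/spdk-nbd.sock, and a 0.1 s poll interval (the real helper's interval may differ):

    # Export one bdev over NBD and wait until the kernel registers the node.
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break    # node is visible
        sleep 0.1                                    # assumed poll interval
    done
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # prove it is readable

    # Count the active exports from the JSON the target reports.
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd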
00:07:25.633 06:32:36 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:25.633 256+0 records in 00:07:25.633 256+0 records out 00:07:25.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00728777 s, 144 MB/s 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.633 06:32:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.895 256+0 records in 00:07:25.895 256+0 records out 00:07:25.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236995 s, 4.4 MB/s 00:07:25.895 06:32:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.895 06:32:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.156 256+0 records in 00:07:26.156 256+0 records out 00:07:26.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20448 s, 5.1 MB/s 00:07:26.157 06:32:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.157 06:32:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:26.442 256+0 records in 00:07:26.442 256+0 records out 00:07:26.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228219 s, 4.6 MB/s 00:07:26.442 06:32:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.442 06:32:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:26.704 256+0 records in 00:07:26.704 256+0 records out 00:07:26.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183004 s, 5.7 MB/s 00:07:26.704 06:32:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.704 06:32:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:26.965 256+0 records in 00:07:26.965 256+0 records out 00:07:26.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.275702 s, 3.8 MB/s 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:26.965 256+0 records in 00:07:26.965 256+0 records out 00:07:26.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151559 s, 6.9 MB/s 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.965 06:32:37 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@51 -- # local i 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.965 06:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@41 -- # break 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.227 06:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.489 06:32:38 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@41 -- # break 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.489 06:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@41 -- # break 00:07:27.750 06:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.751 06:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.751 06:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@41 -- # break 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.011 06:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@41 -- # break 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.272 06:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@41 -- # break 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.272 06:32:39 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@65 -- # true 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.533 06:32:39 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:28.533 06:32:39 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:28.794 malloc_lvol_verify 00:07:28.794 06:32:39 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:28.794 c52923c9-284a-4217-991e-2f451398fff4 00:07:29.057 06:32:39 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:29.057 9c7ea102-30e3-4d03-a4b3-c1730dc3deba 00:07:29.057 06:32:39 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:29.319 /dev/nbd0 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:29.319 mke2fs 1.47.0 (5-Feb-2023) 00:07:29.319 Discarding device blocks: 0/4096 done 00:07:29.319 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:29.319 00:07:29.319 Allocating group tables: 0/1 done 00:07:29.319 Writing inode tables: 0/1 done 00:07:29.319 Creating journal (1024 blocks): done 00:07:29.319 Writing superblocks and filesystem accounting information: 0/1 done 00:07:29.319 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@51 -- # local i 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.319 06:32:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@41 -- # break 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:29.578 06:32:40 -- bdev/nbd_common.sh@147 -- # return 0 00:07:29.578 06:32:40 -- bdev/blockdev.sh@324 -- # killprocess 71797 00:07:29.578 06:32:40 -- common/autotest_common.sh@936 -- # '[' -z 71797 ']' 00:07:29.578 06:32:40 -- common/autotest_common.sh@940 -- # kill -0 71797 00:07:29.578 06:32:40 -- common/autotest_common.sh@941 -- # uname 00:07:29.578 06:32:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:29.578 06:32:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71797 00:07:29.578 06:32:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:29.578 killing process with pid 71797 00:07:29.578 06:32:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:29.578 06:32:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71797' 00:07:29.578 06:32:40 -- common/autotest_common.sh@955 -- # kill 71797 00:07:29.579 06:32:40 -- common/autotest_common.sh@960 -- # wait 71797 00:07:29.839 06:32:40 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:29.839 00:07:29.839 real 0m9.846s 00:07:29.839 user 0m13.448s 00:07:29.839 sys 0m3.365s 00:07:29.839 06:32:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.839 ************************************ 00:07:29.839 END TEST bdev_nbd 00:07:29.839 ************************************ 00:07:29.839 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:07:29.839 06:32:40 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:29.839 06:32:40 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:07:29.839 skipping fio tests on NVMe due to multi-ns failures. 00:07:29.839 06:32:40 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:29.839 06:32:40 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:29.839 06:32:40 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:29.839 06:32:40 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:29.839 06:32:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.839 06:32:40 -- common/autotest_common.sh@10 -- # set +x 00:07:29.839 ************************************ 00:07:29.839 START TEST bdev_verify 00:07:29.839 ************************************ 00:07:29.839 06:32:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.100 [2024-11-28 06:32:40.664051] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
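Earlier in the trace, the data-integrity pass writes one shared 1 MiB random file to all six NBD nodes and then compares the first 1 MiB of every node back against it byte for byte before tearing the exports down. A condensed sketch of that write/verify pattern, with a hypothetical scratch path standing in for the repo-local nbdrandtest file:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp=/tmp/nbdrandtest                              # hypothetical scratch path

    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it out
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                    # byte-compare the first 1 MiB
    done
    rm "$tmp"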
00:07:30.100 [2024-11-28 06:32:40.664201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72171 ] 00:07:30.100 [2024-11-28 06:32:40.802503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.361 [2024-11-28 06:32:40.874173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.361 [2024-11-28 06:32:40.874253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.623 Running I/O for 5 seconds... 00:07:35.918 00:07:35.918 Latency(us) 00:07:35.918 [2024-11-28T06:32:46.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0xbd0bd 00:07:35.918 Nvme0n1 : 5.05 2389.05 9.33 0.00 0.00 53452.64 5293.29 66544.25 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:35.918 Nvme0n1 : 5.05 2389.78 9.34 0.00 0.00 53433.75 5520.15 76223.41 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0xa0000 00:07:35.918 Nvme1n1 : 5.05 2386.92 9.32 0.00 0.00 53445.99 8620.50 63317.86 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0xa0000 length 0xa0000 00:07:35.918 Nvme1n1 : 5.05 2389.12 9.33 0.00 0.00 53337.06 6200.71 68560.74 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0x80000 00:07:35.918 Nvme2n1 : 5.05 2386.27 9.32 0.00 0.00 53325.11 9124.63 55655.19 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x80000 length 0x80000 00:07:35.918 Nvme2n1 : 5.05 2393.28 9.35 0.00 0.00 53169.07 2394.58 65737.65 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0x80000 00:07:35.918 Nvme2n2 : 5.05 2385.62 9.32 0.00 0.00 53283.27 9527.93 54041.99 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x80000 length 0x80000 00:07:35.918 Nvme2n2 : 5.06 2391.21 9.34 0.00 0.00 53126.60 6049.48 62107.96 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0x80000 00:07:35.918 Nvme2n3 : 5.06 2390.76 9.34 0.00 0.00 53124.90 1020.85 49605.71 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x80000 length 0x80000 00:07:35.918 Nvme2n3 : 5.06 2390.63 9.34 0.00 0.00 53076.20 6654.42 58881.58 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x0 length 0x20000 00:07:35.918 Nvme3n1 : 5.06 
2389.19 9.33 0.00 0.00 53095.21 3780.92 49807.36 00:07:35.918 [2024-11-28T06:32:46.688Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.918 Verification LBA range: start 0x20000 length 0x20000 00:07:35.918 Nvme3n1 : 5.07 2394.50 9.35 0.00 0.00 52939.01 2041.70 55251.89 00:07:35.918 [2024-11-28T06:32:46.688Z] =================================================================================================================== 00:07:35.918 [2024-11-28T06:32:46.688Z] Total : 28676.32 112.02 0.00 0.00 53233.76 1020.85 76223.41 00:08:02.501 00:08:02.501 real 0m29.300s 00:08:02.501 user 0m57.403s 00:08:02.501 sys 0m0.483s 00:08:02.501 ************************************ 00:08:02.501 END TEST bdev_verify 00:08:02.501 ************************************ 00:08:02.501 06:33:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.501 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:08:02.501 06:33:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:02.501 06:33:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:02.501 06:33:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.501 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:08:02.501 ************************************ 00:08:02.501 START TEST bdev_verify_big_io 00:08:02.501 ************************************ 00:08:02.501 06:33:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:02.501 [2024-11-28 06:33:10.022251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.501 [2024-11-28 06:33:10.022382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72507 ] 00:08:02.501 [2024-11-28 06:33:10.158171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.501 [2024-11-28 06:33:10.226560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.501 [2024-11-28 06:33:10.226658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.501 Running I/O for 5 seconds... 
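In the verify table above, the IOPS and MiB/s columns are two views of the same rate: with 4 KiB I/Os, MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256. A quick spot-check of the first row:

    # 2389.05 IOPS at 4 KiB each: 2389.05 * 4096 / 1048576 = 9.33 MiB/s
    echo 'scale=2; 2389.05 * 4096 / 1048576' | bc    # prints 9.33, matching the table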
00:08:05.805 00:08:05.805 Latency(us) 00:08:05.805 [2024-11-28T06:33:16.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.805 [2024-11-28T06:33:16.575Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.805 Verification LBA range: start 0x0 length 0xbd0b 00:08:05.805 Nvme0n1 : 5.34 260.72 16.29 0.00 0.00 483133.00 41943.04 600108.11 00:08:05.805 [2024-11-28T06:33:16.575Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.805 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:05.806 Nvme0n1 : 5.39 209.50 13.09 0.00 0.00 600093.96 13913.80 877577.45 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x0 length 0xa000 00:08:05.806 Nvme1n1 : 5.34 260.61 16.29 0.00 0.00 477155.28 43152.94 551712.30 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0xa000 length 0xa000 00:08:05.806 Nvme1n1 : 5.39 209.44 13.09 0.00 0.00 587901.41 14417.92 787238.60 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x0 length 0x8000 00:08:05.806 Nvme2n1 : 5.34 260.54 16.28 0.00 0.00 471174.23 43556.23 503316.48 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x8000 length 0x8000 00:08:05.806 Nvme2n1 : 5.40 217.55 13.60 0.00 0.00 556898.48 8570.09 658183.09 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x0 length 0x8000 00:08:05.806 Nvme2n2 : 5.36 266.57 16.66 0.00 0.00 455946.18 23290.49 454920.66 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x8000 length 0x8000 00:08:05.806 Nvme2n2 : 5.40 217.50 13.59 0.00 0.00 544915.45 9175.04 609787.27 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x0 length 0x8000 00:08:05.806 Nvme2n3 : 5.37 275.24 17.20 0.00 0.00 437948.15 3377.62 448467.89 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x8000 length 0x8000 00:08:05.806 Nvme2n3 : 5.46 261.71 16.36 0.00 0.00 445045.53 5847.83 535580.36 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x0 length 0x2000 00:08:05.806 Nvme3n1 : 5.37 283.56 17.72 0.00 0.00 420822.07 2445.00 451694.28 00:08:05.806 [2024-11-28T06:33:16.576Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.806 Verification LBA range: start 0x2000 length 0x2000 00:08:05.806 Nvme3n1 : 5.54 375.48 23.47 0.00 0.00 305069.55 348.16 503316.48 00:08:05.806 [2024-11-28T06:33:16.576Z] =================================================================================================================== 00:08:05.806 [2024-11-28T06:33:16.576Z] Total : 3098.42 193.65 0.00 0.00 468723.43 348.16 877577.45 00:08:07.184 00:08:07.184 real 0m7.555s 00:08:07.184 user 
0m14.218s 00:08:07.184 sys 0m0.331s 00:08:07.184 06:33:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.184 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:07.184 ************************************ 00:08:07.184 END TEST bdev_verify_big_io 00:08:07.184 ************************************ 00:08:07.184 06:33:17 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.184 06:33:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:07.184 06:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.184 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:08:07.184 ************************************ 00:08:07.184 START TEST bdev_write_zeroes 00:08:07.184 ************************************ 00:08:07.184 06:33:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.184 [2024-11-28 06:33:17.623116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:07.184 [2024-11-28 06:33:17.623242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72610 ] 00:08:07.184 [2024-11-28 06:33:17.755900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.184 [2024-11-28 06:33:17.797909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.442 Running I/O for 1 seconds... 00:08:08.827 00:08:08.827 Latency(us) 00:08:08.827 [2024-11-28T06:33:19.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme0n1 : 1.01 11500.31 44.92 0.00 0.00 11109.39 6125.10 20366.57 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme1n1 : 1.01 11505.79 44.94 0.00 0.00 11091.00 7158.55 20265.75 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme2n1 : 1.01 11492.78 44.89 0.00 0.00 11078.36 6351.95 20064.10 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme2n2 : 1.01 11479.73 44.84 0.00 0.00 11077.13 6225.92 19963.27 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme2n3 : 1.02 11466.80 44.79 0.00 0.00 11075.29 5847.83 20265.75 00:08:08.827 [2024-11-28T06:33:19.597Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.827 Nvme3n1 : 1.02 11453.88 44.74 0.00 0.00 11067.14 5747.00 20064.10 00:08:08.827 [2024-11-28T06:33:19.597Z] =================================================================================================================== 00:08:08.827 [2024-11-28T06:33:19.597Z] Total : 68899.29 269.14 0.00 0.00 11083.05 5747.00 20366.57 00:08:08.827 00:08:08.827 real 0m1.827s 00:08:08.827 user 0m1.542s 00:08:08.827 sys 0m0.173s 00:08:08.827 06:33:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:08:08.827 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.827 ************************************ 00:08:08.827 END TEST bdev_write_zeroes 00:08:08.827 ************************************ 00:08:08.827 06:33:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.827 06:33:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:08.827 06:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.827 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.827 ************************************ 00:08:08.827 START TEST bdev_json_nonenclosed 00:08:08.827 ************************************ 00:08:08.827 06:33:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.827 [2024-11-28 06:33:19.490420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:08.827 [2024-11-28 06:33:19.490551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72647 ] 00:08:09.089 [2024-11-28 06:33:19.625724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.089 [2024-11-28 06:33:19.666310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.089 [2024-11-28 06:33:19.666476] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:09.089 [2024-11-28 06:33:19.666502] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.089 00:08:09.089 real 0m0.318s 00:08:09.089 user 0m0.125s 00:08:09.089 sys 0m0.088s 00:08:09.089 06:33:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.089 ************************************ 00:08:09.089 END TEST bdev_json_nonenclosed 00:08:09.089 ************************************ 00:08:09.089 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:09.089 06:33:19 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.089 06:33:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:09.089 06:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.089 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:08:09.089 ************************************ 00:08:09.089 START TEST bdev_json_nonarray 00:08:09.089 ************************************ 00:08:09.089 06:33:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.089 [2024-11-28 06:33:19.853603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
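The nonenclosed pass above, and the nonarray pass starting here, feed bdevperf deliberately malformed JSON configs and expect spdk_app_stop to report a non-zero exit. The fixture files themselves are not shown in the trace; the snippets below are guesses at minimal inputs that would trigger each error message, not the actual test files:

    # Rejected with "Invalid JSON configuration: not enclosed in {}.":
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # Rejected with "Invalid JSON configuration: 'subsystems' should be an array.":
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF

    # The minimal shape the loader accepts:
    echo '{ "subsystems": [] }' > ok.json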
00:08:09.089 [2024-11-28 06:33:19.853726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:08:09.351 [2024-11-28 06:33:19.989303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.351 [2024-11-28 06:33:20.030355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.351 [2024-11-28 06:33:20.030536] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:09.351 [2024-11-28 06:33:20.030568] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.351 00:08:09.351 real 0m0.317s 00:08:09.351 user 0m0.135s 00:08:09.351 sys 0m0.080s 00:08:09.351 06:33:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.351 ************************************ 00:08:09.351 END TEST bdev_json_nonarray 00:08:09.351 ************************************ 00:08:09.351 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.612 06:33:20 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:09.612 06:33:20 -- bdev/blockdev.sh@809 -- # cleanup 00:08:09.612 06:33:20 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:09.612 06:33:20 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.612 06:33:20 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:09.612 06:33:20 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:09.612 00:08:09.612 real 0m53.442s 00:08:09.612 user 1m32.525s 00:08:09.612 sys 0m5.620s 00:08:09.612 06:33:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.612 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.612 ************************************ 00:08:09.612 END TEST blockdev_nvme 00:08:09.612 ************************************ 00:08:09.612 06:33:20 -- spdk/autotest.sh@206 -- # uname -s 00:08:09.612 06:33:20 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:09.612 06:33:20 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.612 06:33:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.612 06:33:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.612 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.612 ************************************ 00:08:09.612 START TEST blockdev_nvme_gpt 00:08:09.612 ************************************ 00:08:09.612 06:33:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.612 * Looking for test storage... 
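Every test above runs under the same harness: run_test prints a START banner, times the command, and prints an END banner alongside real/user/sys totals like the ones just reported for blockdev_nvme. A rough reconstruction of that wrapper's shape (the real helper in autotest_common.sh adds xtrace control and error handling):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # the banners bracket a timed invocation
        local rc=$?
        echo "END TEST $name"
        return $rc
    }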
00:08:09.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:09.612 06:33:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:09.612 06:33:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:09.612 06:33:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:09.612 06:33:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:09.612 06:33:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:09.612 06:33:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:09.612 06:33:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:09.612 06:33:20 -- scripts/common.sh@335 -- # IFS=.-: 00:08:09.612 06:33:20 -- scripts/common.sh@335 -- # read -ra ver1 00:08:09.612 06:33:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.612 06:33:20 -- scripts/common.sh@336 -- # read -ra ver2 00:08:09.612 06:33:20 -- scripts/common.sh@337 -- # local 'op=<' 00:08:09.612 06:33:20 -- scripts/common.sh@339 -- # ver1_l=2 00:08:09.612 06:33:20 -- scripts/common.sh@340 -- # ver2_l=1 00:08:09.612 06:33:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:09.612 06:33:20 -- scripts/common.sh@343 -- # case "$op" in 00:08:09.612 06:33:20 -- scripts/common.sh@344 -- # : 1 00:08:09.612 06:33:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:09.612 06:33:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.612 06:33:20 -- scripts/common.sh@364 -- # decimal 1 00:08:09.612 06:33:20 -- scripts/common.sh@352 -- # local d=1 00:08:09.612 06:33:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.612 06:33:20 -- scripts/common.sh@354 -- # echo 1 00:08:09.612 06:33:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:09.612 06:33:20 -- scripts/common.sh@365 -- # decimal 2 00:08:09.612 06:33:20 -- scripts/common.sh@352 -- # local d=2 00:08:09.612 06:33:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.612 06:33:20 -- scripts/common.sh@354 -- # echo 2 00:08:09.612 06:33:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:09.612 06:33:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:09.612 06:33:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:09.612 06:33:20 -- scripts/common.sh@367 -- # return 0 00:08:09.612 06:33:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.612 06:33:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.612 --rc genhtml_branch_coverage=1 00:08:09.612 --rc genhtml_function_coverage=1 00:08:09.612 --rc genhtml_legend=1 00:08:09.612 --rc geninfo_all_blocks=1 00:08:09.612 --rc geninfo_unexecuted_blocks=1 00:08:09.612 00:08:09.612 ' 00:08:09.612 06:33:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.612 --rc genhtml_branch_coverage=1 00:08:09.612 --rc genhtml_function_coverage=1 00:08:09.612 --rc genhtml_legend=1 00:08:09.612 --rc geninfo_all_blocks=1 00:08:09.612 --rc geninfo_unexecuted_blocks=1 00:08:09.612 00:08:09.612 ' 00:08:09.612 06:33:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.612 --rc genhtml_branch_coverage=1 00:08:09.612 --rc genhtml_function_coverage=1 00:08:09.612 --rc genhtml_legend=1 00:08:09.612 --rc geninfo_all_blocks=1 00:08:09.612 --rc geninfo_unexecuted_blocks=1 00:08:09.612 00:08:09.612 ' 00:08:09.612 06:33:20 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:09.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.612 --rc genhtml_branch_coverage=1 00:08:09.612 --rc genhtml_function_coverage=1 00:08:09.612 --rc genhtml_legend=1 00:08:09.612 --rc geninfo_all_blocks=1 00:08:09.613 --rc geninfo_unexecuted_blocks=1 00:08:09.613 00:08:09.613 ' 00:08:09.613 06:33:20 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:09.613 06:33:20 -- bdev/nbd_common.sh@6 -- # set -e 00:08:09.613 06:33:20 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:09.613 06:33:20 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.613 06:33:20 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:09.613 06:33:20 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:09.613 06:33:20 -- bdev/blockdev.sh@18 -- # : 00:08:09.613 06:33:20 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:09.613 06:33:20 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:09.613 06:33:20 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:09.613 06:33:20 -- bdev/blockdev.sh@672 -- # uname -s 00:08:09.613 06:33:20 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:09.613 06:33:20 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:09.613 06:33:20 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:09.613 06:33:20 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:09.613 06:33:20 -- bdev/blockdev.sh@682 -- # dek= 00:08:09.613 06:33:20 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:09.613 06:33:20 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:09.613 06:33:20 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:09.613 06:33:20 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:09.613 06:33:20 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:09.613 06:33:20 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:09.613 06:33:20 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=72744 00:08:09.613 06:33:20 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:09.613 06:33:20 -- bdev/blockdev.sh@47 -- # waitforlisten 72744 00:08:09.613 06:33:20 -- common/autotest_common.sh@829 -- # '[' -z 72744 ']' 00:08:09.613 06:33:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.613 06:33:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.613 06:33:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.613 06:33:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.613 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:08:09.613 06:33:20 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:09.871 [2024-11-28 06:33:20.429430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
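The lt/cmp_versions tracing above is a plain numeric compare of dot-separated version fields, used here to pick lcov option sets for versions below 2. A condensed equivalent of the same idea:

    # Succeed when $1 < $2, comparing dot-separated numeric fields.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc option names"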
00:08:09.871 [2024-11-28 06:33:20.429539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72744 ] 00:08:09.871 [2024-11-28 06:33:20.562808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.871 [2024-11-28 06:33:20.603429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.871 [2024-11-28 06:33:20.603656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.812 06:33:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.812 06:33:21 -- common/autotest_common.sh@862 -- # return 0 00:08:10.812 06:33:21 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:10.812 06:33:21 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:10.812 06:33:21 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.072 Waiting for block devices as requested 00:08:11.072 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.072 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.332 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.332 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.616 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:16.616 06:33:26 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:16.616 06:33:26 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:16.616 06:33:26 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:16.616 06:33:26 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:16.616 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.616 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.616 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.616 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:16.616 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.616 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.616 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:16.616 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:16.616 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:16.616 06:33:26 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.617 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:16.617 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:16.617 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.617 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:16.617 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:16.617 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:16.617 06:33:26 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:16.617 06:33:26 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:16.617 06:33:26 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:16.617 06:33:26 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:16.617 06:33:26 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:16.617 06:33:26 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:16.617 06:33:26 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:16.617 06:33:26 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:16.617 06:33:26 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:16.617 06:33:26 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:16.617 06:33:26 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:16.617 06:33:26 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:16.617 BYT; 00:08:16.617 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:16.617 06:33:26 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:16.617 BYT; 00:08:16.617 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:16.617 06:33:26 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:16.617 06:33:26 -- bdev/blockdev.sh@114 -- # break 00:08:16.617 06:33:26 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:16.617 06:33:26 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:16.617 06:33:26 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:16.617 06:33:26 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:16.617 06:33:27 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:16.617 06:33:27 -- scripts/common.sh@410 -- # local spdk_guid 00:08:16.617 06:33:27 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.617 06:33:27 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.617 06:33:27 -- scripts/common.sh@415 -- # IFS='()' 00:08:16.617 06:33:27 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:16.617 06:33:27 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.617 06:33:27 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:16.617 06:33:27 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.617 06:33:27 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.617 06:33:27 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.617 06:33:27 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:16.617 06:33:27 -- scripts/common.sh@422 -- # local spdk_guid 00:08:16.617 06:33:27 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.617 06:33:27 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.617 06:33:27 -- scripts/common.sh@427 -- # IFS='()' 00:08:16.617 06:33:27 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:16.617 06:33:27 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.617 06:33:27 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:16.617 06:33:27 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.617 06:33:27 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.617 06:33:27 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.617 06:33:27 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:17.599 The operation has completed successfully. 00:08:17.599 06:33:28 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:18.528 The operation has completed successfully. 
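The GUID plumbing traced above can be reproduced standalone. A minimal sketch, assuming gpt.h spells each GUID as a parenthesized, comma-separated macro argument list; the two string substitutions are an inference from the pair of spdk_guid values printed at common.sh@416 and @428, not the script's verbatim internals:

    #!/usr/bin/env bash
    # Pull the parenthesized GUID fields out of gpt.h, e.g. a line like
    #   #define SPDK_GPT_PART_TYPE_GUID SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    guid=${guid//, /-}   # join the fields -> 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
    guid=${guid//0x/}    # drop the 0x prefixes -> 6527994e-2c5a-4eec-9613-8f5944074e8b
    # Stamp partition 1 with that type GUID and a fixed unique GUID, matching
    # the sgdisk invocation in the trace:
    sgdisk -t "1:$guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1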
00:08:18.528 06:33:29 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.411 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.411 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.411 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.411 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.411 06:33:30 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:19.411 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.411 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.411 [] 00:08:19.411 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.411 06:33:30 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:19.411 06:33:30 -- bdev/blockdev.sh@79 -- # local json 00:08:19.411 06:33:30 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:19.411 06:33:30 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.411 06:33:30 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:19.411 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.411 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.668 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.668 06:33:30 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:19.668 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.668 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.668 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.668 06:33:30 -- bdev/blockdev.sh@738 -- # cat 00:08:19.668 06:33:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:19.668 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.668 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.668 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.668 06:33:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:19.668 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.668 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.668 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.668 06:33:30 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:19.668 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.668 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.668 06:33:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.668 06:33:30 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:19.668 06:33:30 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:19.668 06:33:30 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:19.668 06:33:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.668 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:19.927 06:33:30 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.927 06:33:30 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:19.927 06:33:30 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:19.928 06:33:30 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "09b8ad17-a849-4034-8c99-da30016ab0fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "09b8ad17-a849-4034-8c99-da30016ab0fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "85ebe40f-8a92-463b-bdd2-054e97280a73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85ebe40f-8a92-463b-bdd2-054e97280a73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2737bba1-e399-4998-802a-d9f34f3ac824"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2737bba1-e399-4998-802a-d9f34f3ac824",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6f3bf620-9742-4f67-9ae2-64ddf0ed6f7f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f3bf620-9742-4f67-9ae2-64ddf0ed6f7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "06effc90-54f2-4b4d-936b-c94e18da684c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "06effc90-54f2-4b4d-936b-c94e18da684c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:19.928 06:33:30 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:19.928 06:33:30 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:19.928 06:33:30 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:19.928 06:33:30 -- bdev/blockdev.sh@752 -- # killprocess 72744 00:08:19.928 06:33:30 -- common/autotest_common.sh@936 -- # '[' -z 72744 ']' 00:08:19.928 06:33:30 -- common/autotest_common.sh@940 -- # kill -0 72744 00:08:19.928 06:33:30 -- common/autotest_common.sh@941 -- # uname 00:08:19.928 06:33:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.928 06:33:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72744 00:08:19.928 06:33:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.928 06:33:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.928 killing process with pid 72744 00:08:19.928 06:33:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72744' 00:08:19.928 06:33:30 -- common/autotest_common.sh@955 -- # kill 72744 00:08:19.928 06:33:30 -- common/autotest_common.sh@960 -- # wait 72744 00:08:20.186 06:33:30 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:20.186 06:33:30 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:20.186 06:33:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:20.186 06:33:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.186 06:33:30 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 ************************************ 00:08:20.186 START TEST bdev_hello_world 00:08:20.186 ************************************ 00:08:20.186 06:33:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:20.186 [2024-11-28 06:33:30.899505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:20.187 [2024-11-28 06:33:30.899615] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73379 ] 00:08:20.444 [2024-11-28 06:33:31.033555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.444 [2024-11-28 06:33:31.073479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.703 [2024-11-28 06:33:31.436424] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:20.703 [2024-11-28 06:33:31.436479] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:20.703 [2024-11-28 06:33:31.436498] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:20.703 [2024-11-28 06:33:31.438579] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:20.703 [2024-11-28 06:33:31.438961] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:20.703 [2024-11-28 06:33:31.438994] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:20.703 [2024-11-28 06:33:31.439203] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:20.703 00:08:20.703 [2024-11-28 06:33:31.439235] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:21.270 00:08:21.270 real 0m1.065s 00:08:21.270 user 0m0.726s 00:08:21.270 sys 0m0.234s 00:08:21.270 06:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.270 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:21.270 ************************************ 00:08:21.270 END TEST bdev_hello_world 00:08:21.270 ************************************ 00:08:21.270 06:33:31 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:21.270 06:33:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.270 06:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.270 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:21.270 ************************************ 00:08:21.270 START TEST bdev_bounds 00:08:21.270 ************************************ 00:08:21.270 06:33:31 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:21.270 06:33:31 -- bdev/blockdev.sh@288 -- # bdevio_pid=73410 00:08:21.270 06:33:31 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.270 Process bdevio pid: 73410 00:08:21.270 06:33:31 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 73410' 00:08:21.270 06:33:31 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:21.270 06:33:31 -- bdev/blockdev.sh@291 -- # waitforlisten 73410 00:08:21.270 06:33:31 -- common/autotest_common.sh@829 -- # '[' -z 73410 ']' 00:08:21.270 06:33:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.270 06:33:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.270 06:33:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
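The helper now polls until something answers on /var/tmp/spdk.sock. A rough sketch of that wait loop, assuming readiness means the UNIX socket exists and an RPC round-trip succeeds; the probe command is an illustrative choice, not necessarily the helper's actual check:

    #!/usr/bin/env bash
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 1; i <= max_retries; i++)); do
        # Consider the target up once the socket exists and the RPC layer replies.
        if [[ -S "$rpc_addr" ]] &&
           /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            echo "listening on $rpc_addr after $i probes"
            break
        fi
        sleep 0.5
    done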
00:08:21.271 06:33:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.271 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:08:21.271 [2024-11-28 06:33:31.995727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:21.271 [2024-11-28 06:33:31.995841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73410 ] 00:08:21.528 [2024-11-28 06:33:32.130372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.528 [2024-11-28 06:33:32.171934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.528 [2024-11-28 06:33:32.172246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.528 [2024-11-28 06:33:32.172311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.094 06:33:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.094 06:33:32 -- common/autotest_common.sh@862 -- # return 0 00:08:22.094 06:33:32 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:22.352 I/O targets: 00:08:22.353 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:22.353 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:22.353 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:22.353 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.353 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.353 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:22.353 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:22.353 00:08:22.353 00:08:22.353 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.353 http://cunit.sourceforge.net/ 00:08:22.353 00:08:22.353 00:08:22.353 Suite: bdevio tests on: Nvme3n1 00:08:22.353 Test: blockdev write read block ...passed 00:08:22.353 Test: blockdev write zeroes read block ...passed 00:08:22.353 Test: blockdev write zeroes read no split ...passed 00:08:22.353 Test: blockdev write zeroes read split ...passed 00:08:22.353 Test: blockdev write zeroes read split partial ...passed 00:08:22.353 Test: blockdev reset ...[2024-11-28 06:33:32.985760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:22.353 passed 00:08:22.353 Test: blockdev write read 8 blocks ...[2024-11-28 06:33:32.987619] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
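Each suite below opens with the same reset handshake seen here: a nvme_ctrlr_disconnect notice for the controller under test, then a completion notice from bdev_nvme. A quick post-hoc sanity check over a saved copy of this console output (the file name is illustrative) is to verify the pairs balance:

    # Every "resetting controller" notice should have a matching completion.
    starts=$(grep -c 'nvme_ctrlr_disconnect: \*NOTICE\*: .* resetting controller' bdevio.log)
    completes=$(grep -c 'Resetting controller successful' bdevio.log)
    if [[ "$starts" -eq "$completes" ]]; then
        echo "all $starts controller resets completed"
    else
        echo "mismatch: $starts started, $completes completed" >&2
    fi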
00:08:22.353 passed 00:08:22.353 Test: blockdev write read size > 128k ...passed 00:08:22.353 Test: blockdev write read invalid size ...passed 00:08:22.353 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.353 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.353 Test: blockdev write read max offset ...passed 00:08:22.353 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.353 Test: blockdev writev readv 8 blocks ...passed 00:08:22.353 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.353 Test: blockdev writev readv block ...passed 00:08:22.353 Test: blockdev writev readv size > 128k ...passed 00:08:22.353 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.353 Test: blockdev comparev and writev ...[2024-11-28 06:33:32.992637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb806000 len:0x1000 00:08:22.353 [2024-11-28 06:33:32.992697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.353 passed 00:08:22.353 Test: blockdev nvme passthru rw ...passed 00:08:22.353 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.353 Test: blockdev nvme admin passthru ...[2024-11-28 06:33:32.993292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.353 [2024-11-28 06:33:32.993316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.353 passed 00:08:22.353 Test: blockdev copy ...passed 00:08:22.353 Suite: bdevio tests on: Nvme2n3 00:08:22.353 Test: blockdev write read block ...passed 00:08:22.353 Test: blockdev write zeroes read block ...passed 00:08:22.353 Test: blockdev write zeroes read no split ...passed 00:08:22.353 Test: blockdev write zeroes read split ...passed 00:08:22.353 Test: blockdev write zeroes read split partial ...passed 00:08:22.353 Test: blockdev reset ...[2024-11-28 06:33:33.082451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.353 [2024-11-28 06:33:33.084548] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.353 passed 00:08:22.353 Test: blockdev write read 8 blocks ...passed 00:08:22.353 Test: blockdev write read size > 128k ...passed 00:08:22.353 Test: blockdev write read invalid size ...passed 00:08:22.353 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.353 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.353 Test: blockdev write read max offset ...passed 00:08:22.353 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.353 Test: blockdev writev readv 8 blocks ...passed 00:08:22.353 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.353 Test: blockdev writev readv block ...passed 00:08:22.353 Test: blockdev writev readv size > 128k ...passed 00:08:22.353 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.353 Test: blockdev comparev and writev ...[2024-11-28 06:33:33.089176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc06000 len:0x1000 00:08:22.353 [2024-11-28 06:33:33.089217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.353 passed 00:08:22.353 Test: blockdev nvme passthru rw ...passed 00:08:22.353 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.353 Test: blockdev nvme admin passthru ...[2024-11-28 06:33:33.089686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.353 [2024-11-28 06:33:33.089723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.353 passed 00:08:22.353 Test: blockdev copy ...passed 00:08:22.353 Suite: bdevio tests on: Nvme2n2 00:08:22.353 Test: blockdev write read block ...passed 00:08:22.611 Test: blockdev write zeroes read block ...passed 00:08:22.611 Test: blockdev write zeroes read no split ...passed 00:08:22.611 Test: blockdev write zeroes read split ...passed 00:08:22.611 Test: blockdev write zeroes read split partial ...passed 00:08:22.611 Test: blockdev reset ...[2024-11-28 06:33:33.178920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.611 [2024-11-28 06:33:33.181016] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.611 passed 00:08:22.611 Test: blockdev write read 8 blocks ...passed 00:08:22.611 Test: blockdev write read size > 128k ...passed 00:08:22.611 Test: blockdev write read invalid size ...passed 00:08:22.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.611 Test: blockdev write read max offset ...passed 00:08:22.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.611 Test: blockdev writev readv 8 blocks ...passed 00:08:22.611 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.611 Test: blockdev writev readv block ...passed 00:08:22.611 Test: blockdev writev readv size > 128k ...passed 00:08:22.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.611 Test: blockdev comparev and writev ...passed 00:08:22.611 Test: blockdev nvme passthru rw ...[2024-11-28 06:33:33.185556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc02000 len:0x1000 00:08:22.611 [2024-11-28 06:33:33.185603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.612 passed 00:08:22.612 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.612 Test: blockdev nvme admin passthru ...[2024-11-28 06:33:33.186147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.612 [2024-11-28 06:33:33.186168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.612 passed 00:08:22.612 Test: blockdev copy ...passed 00:08:22.612 Suite: bdevio tests on: Nvme2n1 00:08:22.612 Test: blockdev write read block ...passed 00:08:22.612 Test: blockdev write zeroes read block ...passed 00:08:22.612 Test: blockdev write zeroes read no split ...passed 00:08:22.612 Test: blockdev write zeroes read split ...passed 00:08:22.612 Test: blockdev write zeroes read split partial ...passed 00:08:22.612 Test: blockdev reset ...[2024-11-28 06:33:33.295238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:22.612 passed 00:08:22.612 Test: blockdev write read 8 blocks ...[2024-11-28 06:33:33.297233] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.612 passed 00:08:22.612 Test: blockdev write read size > 128k ...passed 00:08:22.612 Test: blockdev write read invalid size ...passed 00:08:22.612 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.612 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.612 Test: blockdev write read max offset ...passed 00:08:22.612 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.612 Test: blockdev writev readv 8 blocks ...passed 00:08:22.612 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.612 Test: blockdev writev readv block ...passed 00:08:22.612 Test: blockdev writev readv size > 128k ...passed 00:08:22.612 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.612 Test: blockdev comparev and writev ...[2024-11-28 06:33:33.301338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb80d000 len:0x1000 00:08:22.612 [2024-11-28 06:33:33.301379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.612 passed 00:08:22.612 Test: blockdev nvme passthru rw ...passed 00:08:22.612 Test: blockdev nvme passthru vendor specific ...[2024-11-28 06:33:33.301770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.612 passed 00:08:22.612 Test: blockdev nvme admin passthru ...[2024-11-28 06:33:33.301792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.612 passed 00:08:22.612 Test: blockdev copy ...passed 00:08:22.612 Suite: bdevio tests on: Nvme1n1 00:08:22.612 Test: blockdev write read block ...passed 00:08:22.871 Test: blockdev write zeroes read block ...passed 00:08:22.871 Test: blockdev write zeroes read no split ...passed 00:08:22.871 Test: blockdev write zeroes read split ...passed 00:08:22.871 Test: blockdev write zeroes read split partial ...passed 00:08:22.871 Test: blockdev reset ...[2024-11-28 06:33:33.448217] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:22.871 passed 00:08:22.871 Test: blockdev write read 8 blocks ...[2024-11-28 06:33:33.449908] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:22.871 passed 00:08:22.871 Test: blockdev write read size > 128k ...passed 00:08:22.871 Test: blockdev write read invalid size ...passed 00:08:22.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.871 Test: blockdev write read max offset ...passed 00:08:22.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.871 Test: blockdev writev readv 8 blocks ...passed 00:08:22.871 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.871 Test: blockdev writev readv block ...passed 00:08:22.871 Test: blockdev writev readv size > 128k ...passed 00:08:22.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.871 Test: blockdev comparev and writev ...[2024-11-28 06:33:33.453833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c36000 len:0x1000 00:08:22.871 [2024-11-28 06:33:33.453880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:22.871 passed 00:08:22.871 Test: blockdev nvme passthru rw ...passed 00:08:22.871 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.871 Test: blockdev nvme admin passthru ...[2024-11-28 06:33:33.454384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:22.871 [2024-11-28 06:33:33.454414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:22.871 passed 00:08:22.871 Test: blockdev copy ...passed 00:08:22.871 Suite: bdevio tests on: Nvme0n1p2 00:08:22.871 Test: blockdev write read block ...passed 00:08:22.871 Test: blockdev write zeroes read block ...passed 00:08:22.871 Test: blockdev write zeroes read no split ...passed 00:08:22.871 Test: blockdev write zeroes read split ...passed 00:08:22.871 Test: blockdev write zeroes read split partial ...passed 00:08:22.871 Test: blockdev reset ...[2024-11-28 06:33:33.475463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:22.871 passed 00:08:22.871 Test: blockdev write read 8 blocks ...[2024-11-28 06:33:33.477104] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:22.871 passed 00:08:22.871 Test: blockdev write read size > 128k ...passed 00:08:22.871 Test: blockdev write read invalid size ...passed 00:08:22.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.871 Test: blockdev write read max offset ...passed 00:08:22.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.871 Test: blockdev writev readv 8 blocks ...passed 00:08:22.871 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.871 Test: blockdev writev readv block ...passed 00:08:22.871 Test: blockdev writev readv size > 128k ...passed 00:08:22.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.871 Test: blockdev comparev and writev ...[2024-11-28 06:33:33.480812] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:22.871 separate metadata which is not supported yet. 
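bdevio skips comparev_and_writev on the two GPT bdevs because, per the bdev dump earlier in this run, they carry separate metadata ("md_size": 64 with "md_interleave": false), which that test does not handle. Assuming the target is still up on the default RPC socket and jq is available, the affected bdevs can be listed directly:

    # Show which bdevs expose metadata, i.e. the ones comparev_and_writev skips.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0)
                 | "\(.name) md_size=\(.md_size) md_interleave=\(.md_interleave)"'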
00:08:22.871 passed 00:08:22.871 Test: blockdev nvme passthru rw ...passed 00:08:22.871 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.871 Test: blockdev nvme admin passthru ...passed 00:08:22.871 Test: blockdev copy ...passed 00:08:22.871 Suite: bdevio tests on: Nvme0n1p1 00:08:22.871 Test: blockdev write read block ...passed 00:08:22.872 Test: blockdev write zeroes read block ...passed 00:08:22.872 Test: blockdev write zeroes read no split ...passed 00:08:22.872 Test: blockdev write zeroes read split ...passed 00:08:22.872 Test: blockdev write zeroes read split partial ...passed 00:08:22.872 Test: blockdev reset ...[2024-11-28 06:33:33.501972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:22.872 passed 00:08:22.872 Test: blockdev write read 8 blocks ...[2024-11-28 06:33:33.503556] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:22.872 passed 00:08:22.872 Test: blockdev write read size > 128k ...passed 00:08:22.872 Test: blockdev write read invalid size ...passed 00:08:22.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:22.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:22.872 Test: blockdev write read max offset ...passed 00:08:22.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:22.872 Test: blockdev writev readv 8 blocks ...passed 00:08:22.872 Test: blockdev writev readv 30 x 1block ...passed 00:08:22.872 Test: blockdev writev readv block ...passed 00:08:22.872 Test: blockdev writev readv size > 128k ...passed 00:08:22.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:22.872 Test: blockdev comparev and writev ...[2024-11-28 06:33:33.507662] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:22.872 separate metadata which is not supported yet. 
00:08:22.872 passed 00:08:22.872 Test: blockdev nvme passthru rw ...passed 00:08:22.872 Test: blockdev nvme passthru vendor specific ...passed 00:08:22.872 Test: blockdev nvme admin passthru ...passed 00:08:22.872 Test: blockdev copy ...passed 00:08:22.872 00:08:22.872 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.872 suites 7 7 n/a 0 0 00:08:22.872 tests 161 161 161 0 0 00:08:22.872 asserts 1006 1006 1006 0 n/a 00:08:22.872 00:08:22.872 Elapsed time = 1.249 seconds 00:08:22.872 0 00:08:22.872 06:33:33 -- bdev/blockdev.sh@293 -- # killprocess 73410 00:08:22.872 06:33:33 -- common/autotest_common.sh@936 -- # '[' -z 73410 ']' 00:08:22.872 06:33:33 -- common/autotest_common.sh@940 -- # kill -0 73410 00:08:22.872 06:33:33 -- common/autotest_common.sh@941 -- # uname 00:08:22.872 06:33:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.872 06:33:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73410 00:08:22.872 06:33:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:22.872 06:33:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:22.872 killing process with pid 73410 00:08:22.872 06:33:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73410' 00:08:22.872 06:33:33 -- common/autotest_common.sh@955 -- # kill 73410 00:08:22.872 06:33:33 -- common/autotest_common.sh@960 -- # wait 73410 00:08:23.134 06:33:33 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:23.134 00:08:23.134 real 0m1.788s 00:08:23.134 user 0m4.271s 00:08:23.134 sys 0m0.269s 00:08:23.134 06:33:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.134 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 ************************************ 00:08:23.134 END TEST bdev_bounds 00:08:23.134 ************************************ 00:08:23.134 06:33:33 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:23.134 06:33:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:23.134 06:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.134 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 ************************************ 00:08:23.134 START TEST bdev_nbd 00:08:23.134 ************************************ 00:08:23.134 06:33:33 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:23.134 06:33:33 -- bdev/blockdev.sh@298 -- # uname -s 00:08:23.134 06:33:33 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:23.134 06:33:33 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.134 06:33:33 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:23.134 06:33:33 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.134 06:33:33 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:23.134 06:33:33 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:08:23.134 06:33:33 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:23.134 06:33:33 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
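The nbd_all array above is only the candidate device pool; the test then exports each bdev through the target's NBD server and lets the RPC allocate the device node. A minimal sketch of that mapping loop, assuming the nbd kernel module is loaded and bdev_svc is already listening on the socket shown (the loop itself is illustrative, while the RPC calls mirror the trace that follows):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for b in Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # With no explicit /dev/nbdX argument the RPC allocates one and prints
        # it, which is how the trace captures nbd_device=/dev/nbd0 and so on.
        dev=$("$rpc" -s "$sock" nbd_start_disk "$b")
        echo "$b -> $dev"
    done
    "$rpc" -s "$sock" nbd_get_disks   # the JSON bdev/nbd mapping dumped below

Each freshly mapped node is then smoke-tested with a single 4 KiB dd read, which is what the repeated "1+0 records in / 1+0 records out" lines in the trace record.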
00:08:23.134 06:33:33 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:23.134 06:33:33 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:08:23.134 06:33:33 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:23.134 06:33:33 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:23.134 06:33:33 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.134 06:33:33 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:23.134 06:33:33 -- bdev/blockdev.sh@316 -- # nbd_pid=73464 00:08:23.134 06:33:33 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:23.134 06:33:33 -- bdev/blockdev.sh@318 -- # waitforlisten 73464 /var/tmp/spdk-nbd.sock 00:08:23.134 06:33:33 -- common/autotest_common.sh@829 -- # '[' -z 73464 ']' 00:08:23.134 06:33:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.134 06:33:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.134 06:33:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.134 06:33:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.134 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 06:33:33 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:23.134 [2024-11-28 06:33:33.833150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:23.134 [2024-11-28 06:33:33.833257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.394 [2024-11-28 06:33:33.969841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.394 [2024-11-28 06:33:34.010625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.967 06:33:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.967 06:33:34 -- common/autotest_common.sh@862 -- # return 0 00:08:23.967 06:33:34 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@24 -- # local i 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:23.967 06:33:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:24.228 06:33:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:24.228 06:33:34 -- common/autotest_common.sh@867 -- # local i 00:08:24.228 06:33:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.228 06:33:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.228 06:33:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:24.228 06:33:34 -- common/autotest_common.sh@871 -- # break 00:08:24.228 06:33:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.228 06:33:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.228 06:33:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.228 1+0 records in 00:08:24.228 1+0 records out 00:08:24.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355756 s, 11.5 MB/s 00:08:24.228 06:33:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.228 06:33:34 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.228 06:33:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.228 06:33:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.228 06:33:34 -- common/autotest_common.sh@887 -- # return 0 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@27 
-- # (( i++ )) 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.228 06:33:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:24.488 06:33:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:24.488 06:33:35 -- common/autotest_common.sh@867 -- # local i 00:08:24.488 06:33:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.488 06:33:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.488 06:33:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:24.488 06:33:35 -- common/autotest_common.sh@871 -- # break 00:08:24.488 06:33:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.488 06:33:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.488 06:33:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.488 1+0 records in 00:08:24.488 1+0 records out 00:08:24.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311286 s, 13.2 MB/s 00:08:24.488 06:33:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.488 06:33:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.488 06:33:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.488 06:33:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.488 06:33:35 -- common/autotest_common.sh@887 -- # return 0 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.488 06:33:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:24.748 06:33:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:24.748 06:33:35 -- common/autotest_common.sh@867 -- # local i 00:08:24.748 06:33:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.748 06:33:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.748 06:33:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:24.748 06:33:35 -- common/autotest_common.sh@871 -- # break 00:08:24.748 06:33:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.748 06:33:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.748 06:33:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.748 1+0 records in 00:08:24.748 1+0 records out 00:08:24.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423366 s, 9.7 MB/s 00:08:24.748 06:33:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.748 06:33:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.748 06:33:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.748 06:33:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.748 06:33:35 -- 
common/autotest_common.sh@887 -- # return 0 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.748 06:33:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:25.009 06:33:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:25.009 06:33:35 -- common/autotest_common.sh@867 -- # local i 00:08:25.009 06:33:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:25.009 06:33:35 -- common/autotest_common.sh@871 -- # break 00:08:25.009 06:33:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.009 1+0 records in 00:08:25.009 1+0 records out 00:08:25.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447264 s, 9.2 MB/s 00:08:25.009 06:33:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.009 06:33:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.009 06:33:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.009 06:33:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.009 06:33:35 -- common/autotest_common.sh@887 -- # return 0 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:25.009 06:33:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:25.009 06:33:35 -- common/autotest_common.sh@867 -- # local i 00:08:25.009 06:33:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:25.009 06:33:35 -- common/autotest_common.sh@871 -- # break 00:08:25.009 06:33:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.009 06:33:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.009 1+0 records in 00:08:25.009 1+0 records out 00:08:25.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363147 s, 11.3 MB/s 00:08:25.009 06:33:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.009 06:33:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.009 06:33:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.009 06:33:35 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.009 06:33:35 -- common/autotest_common.sh@887 -- # return 0 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.009 06:33:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:25.270 06:33:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:25.270 06:33:35 -- common/autotest_common.sh@867 -- # local i 00:08:25.270 06:33:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.270 06:33:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.270 06:33:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:25.270 06:33:35 -- common/autotest_common.sh@871 -- # break 00:08:25.270 06:33:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.270 06:33:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.270 06:33:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.270 1+0 records in 00:08:25.270 1+0 records out 00:08:25.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319797 s, 12.8 MB/s 00:08:25.270 06:33:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.270 06:33:35 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.270 06:33:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.270 06:33:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.270 06:33:35 -- common/autotest_common.sh@887 -- # return 0 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.270 06:33:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:25.531 06:33:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:25.531 06:33:36 -- common/autotest_common.sh@867 -- # local i 00:08:25.531 06:33:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.531 06:33:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.531 06:33:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:25.531 06:33:36 -- common/autotest_common.sh@871 -- # break 00:08:25.531 06:33:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.531 06:33:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.531 06:33:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.531 1+0 records in 00:08:25.531 1+0 records out 00:08:25.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461291 s, 8.9 MB/s 00:08:25.531 06:33:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.531 06:33:36 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.531 06:33:36 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.531 06:33:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.531 06:33:36 -- common/autotest_common.sh@887 -- # return 0 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.531 06:33:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd0", 00:08:25.792 "bdev_name": "Nvme0n1p1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd1", 00:08:25.792 "bdev_name": "Nvme0n1p2" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd2", 00:08:25.792 "bdev_name": "Nvme1n1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd3", 00:08:25.792 "bdev_name": "Nvme2n1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd4", 00:08:25.792 "bdev_name": "Nvme2n2" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd5", 00:08:25.792 "bdev_name": "Nvme2n3" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd6", 00:08:25.792 "bdev_name": "Nvme3n1" 00:08:25.792 } 00:08:25.792 ]' 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd0", 00:08:25.792 "bdev_name": "Nvme0n1p1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd1", 00:08:25.792 "bdev_name": "Nvme0n1p2" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd2", 00:08:25.792 "bdev_name": "Nvme1n1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd3", 00:08:25.792 "bdev_name": "Nvme2n1" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd4", 00:08:25.792 "bdev_name": "Nvme2n2" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd5", 00:08:25.792 "bdev_name": "Nvme2n3" 00:08:25.792 }, 00:08:25.792 { 00:08:25.792 "nbd_device": "/dev/nbd6", 00:08:25.792 "bdev_name": "Nvme3n1" 00:08:25.792 } 00:08:25.792 ]' 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.792 06:33:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@41 -- # break 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@41 -- # break 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.054 06:33:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:26.314 06:33:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@41 -- # break 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.314 06:33:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:26.572 06:33:37 -- bdev/nbd_common.sh@41 -- # break 00:08:26.573 06:33:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.573 06:33:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.573 06:33:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@41 -- # break 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.831 06:33:37 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@41 -- # break 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.831 06:33:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@41 -- # break 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.089 06:33:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.347 06:33:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:27.347 06:33:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:27.347 06:33:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@65 -- # true 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@65 -- # count=0 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@122 -- # count=0 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@127 -- # return 0 00:08:27.347 06:33:38 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.347 06:33:38 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@12 -- # local i 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.347 06:33:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:27.605 /dev/nbd0 00:08:27.605 06:33:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.605 06:33:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.605 06:33:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:27.605 06:33:38 -- common/autotest_common.sh@867 -- # local i 00:08:27.605 06:33:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.605 06:33:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.605 06:33:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:27.605 06:33:38 -- common/autotest_common.sh@871 -- # break 00:08:27.605 06:33:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.605 06:33:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.605 06:33:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.605 1+0 records in 00:08:27.605 1+0 records out 00:08:27.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290137 s, 14.1 MB/s 00:08:27.605 06:33:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.605 06:33:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.605 06:33:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.605 06:33:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.605 06:33:38 -- common/autotest_common.sh@887 -- # return 0 00:08:27.605 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.605 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.605 06:33:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:27.863 /dev/nbd1 00:08:27.863 06:33:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.863 06:33:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.863 06:33:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:27.863 06:33:38 -- common/autotest_common.sh@867 -- # local i 00:08:27.863 06:33:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:27.863 06:33:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:27.863 06:33:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:27.863 06:33:38 -- common/autotest_common.sh@871 -- # break 00:08:27.863 06:33:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:27.863 06:33:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:27.863 06:33:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.863 1+0 records in 00:08:27.863 1+0 records out 00:08:27.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260856 s, 15.7 MB/s 00:08:27.863 06:33:38 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.863 06:33:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:27.863 06:33:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.863 06:33:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:27.863 06:33:38 -- common/autotest_common.sh@887 -- # return 0 00:08:27.863 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.863 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.863 06:33:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:28.121 /dev/nbd10 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:28.121 06:33:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:28.121 06:33:38 -- common/autotest_common.sh@867 -- # local i 00:08:28.121 06:33:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:28.121 06:33:38 -- common/autotest_common.sh@871 -- # break 00:08:28.121 06:33:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.121 1+0 records in 00:08:28.121 1+0 records out 00:08:28.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341198 s, 12.0 MB/s 00:08:28.121 06:33:38 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.121 06:33:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.121 06:33:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.121 06:33:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.121 06:33:38 -- common/autotest_common.sh@887 -- # return 0 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:28.121 /dev/nbd11 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:28.121 06:33:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:28.121 06:33:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:28.121 06:33:38 -- common/autotest_common.sh@867 -- # local i 00:08:28.121 06:33:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:28.121 06:33:38 -- common/autotest_common.sh@871 -- # break 00:08:28.121 06:33:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.121 06:33:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.380 1+0 records in 00:08:28.380 1+0 records out 00:08:28.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041306 s, 9.9 MB/s 00:08:28.380 06:33:38 -- common/autotest_common.sh@884 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.380 06:33:38 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.380 06:33:38 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.380 06:33:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.380 06:33:38 -- common/autotest_common.sh@887 -- # return 0 00:08:28.380 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.380 06:33:38 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.380 06:33:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:28.380 /dev/nbd12 00:08:28.380 06:33:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.380 06:33:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.380 06:33:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:28.380 06:33:39 -- common/autotest_common.sh@867 -- # local i 00:08:28.380 06:33:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.380 06:33:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.380 06:33:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:28.380 06:33:39 -- common/autotest_common.sh@871 -- # break 00:08:28.380 06:33:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.380 06:33:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.380 06:33:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.380 1+0 records in 00:08:28.380 1+0 records out 00:08:28.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329508 s, 12.4 MB/s 00:08:28.380 06:33:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.380 06:33:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.380 06:33:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.380 06:33:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.380 06:33:39 -- common/autotest_common.sh@887 -- # return 0 00:08:28.380 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.380 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.380 06:33:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:28.639 /dev/nbd13 00:08:28.639 06:33:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:28.639 06:33:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:28.639 06:33:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:28.639 06:33:39 -- common/autotest_common.sh@867 -- # local i 00:08:28.639 06:33:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.639 06:33:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.639 06:33:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:28.639 06:33:39 -- common/autotest_common.sh@871 -- # break 00:08:28.639 06:33:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.639 06:33:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.639 06:33:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.639 1+0 records in 00:08:28.639 1+0 records out 00:08:28.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966453 s, 4.2 MB/s 00:08:28.639 06:33:39 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.639 06:33:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.639 06:33:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.639 06:33:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.639 06:33:39 -- common/autotest_common.sh@887 -- # return 0 00:08:28.639 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.639 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.639 06:33:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:28.897 /dev/nbd14 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:28.897 06:33:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:28.897 06:33:39 -- common/autotest_common.sh@867 -- # local i 00:08:28.897 06:33:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:28.897 06:33:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:28.897 06:33:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:28.897 06:33:39 -- common/autotest_common.sh@871 -- # break 00:08:28.897 06:33:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:28.897 06:33:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:28.897 06:33:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.897 1+0 records in 00:08:28.897 1+0 records out 00:08:28.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120692 s, 3.4 MB/s 00:08:28.897 06:33:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.897 06:33:39 -- common/autotest_common.sh@884 -- # size=4096 00:08:28.897 06:33:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.897 06:33:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:28.897 06:33:39 -- common/autotest_common.sh@887 -- # return 0 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.897 06:33:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd0", 00:08:29.156 "bdev_name": "Nvme0n1p1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd1", 00:08:29.156 "bdev_name": "Nvme0n1p2" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd10", 00:08:29.156 "bdev_name": "Nvme1n1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd11", 00:08:29.156 "bdev_name": "Nvme2n1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd12", 00:08:29.156 "bdev_name": "Nvme2n2" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd13", 00:08:29.156 "bdev_name": "Nvme2n3" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd14", 00:08:29.156 "bdev_name": "Nvme3n1" 00:08:29.156 } 00:08:29.156 ]' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd0", 00:08:29.156 "bdev_name": "Nvme0n1p1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd1", 00:08:29.156 "bdev_name": "Nvme0n1p2" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd10", 00:08:29.156 "bdev_name": "Nvme1n1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd11", 00:08:29.156 "bdev_name": "Nvme2n1" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd12", 00:08:29.156 "bdev_name": "Nvme2n2" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd13", 00:08:29.156 "bdev_name": "Nvme2n3" 00:08:29.156 }, 00:08:29.156 { 00:08:29.156 "nbd_device": "/dev/nbd14", 00:08:29.156 "bdev_name": "Nvme3n1" 00:08:29.156 } 00:08:29.156 ]' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:29.156 /dev/nbd1 00:08:29.156 /dev/nbd10 00:08:29.156 /dev/nbd11 00:08:29.156 /dev/nbd12 00:08:29.156 /dev/nbd13 00:08:29.156 /dev/nbd14' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:29.156 /dev/nbd1 00:08:29.156 /dev/nbd10 00:08:29.156 /dev/nbd11 00:08:29.156 /dev/nbd12 00:08:29.156 /dev/nbd13 00:08:29.156 /dev/nbd14' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@65 -- # count=7 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@66 -- # echo 7 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@95 -- # count=7 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:29.156 256+0 records in 00:08:29.156 256+0 records out 00:08:29.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105563 s, 99.3 MB/s 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.156 06:33:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:29.414 256+0 records in 00:08:29.414 256+0 records out 00:08:29.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162356 s, 6.5 MB/s 00:08:29.414 06:33:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.414 06:33:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.414 256+0 records in 00:08:29.414 256+0 records out 00:08:29.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121017 s, 8.7 MB/s 00:08:29.414 06:33:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.414 06:33:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:29.672 256+0 records in 00:08:29.673 256+0 records out 
00:08:29.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170945 s, 6.1 MB/s 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.673 256+0 records in 00:08:29.673 256+0 records out 00:08:29.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121228 s, 8.6 MB/s 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:29.673 256+0 records in 00:08:29.673 256+0 records out 00:08:29.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0722553 s, 14.5 MB/s 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.673 06:33:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:29.931 256+0 records in 00:08:29.931 256+0 records out 00:08:29.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0714255 s, 14.7 MB/s 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:29.931 256+0 records in 00:08:29.931 256+0 records out 00:08:29.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.081389 s, 12.9 MB/s 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.931 06:33:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@51 -- # local i 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.932 06:33:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@41 -- # break 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.190 06:33:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@41 -- # break 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.448 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@41 -- # break 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd11 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@41 -- # break 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.705 06:33:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@41 -- # break 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.964 06:33:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@41 -- # break 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.222 06:33:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.480 06:33:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@41 -- # break 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@65 -- # true 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.480 06:33:42 
-- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.480 06:33:42 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:31.480 06:33:42 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:31.777 malloc_lvol_verify 00:08:31.777 06:33:42 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:32.063 07b24ca7-5f13-4f77-8708-6e3c22e4db3e 00:08:32.063 06:33:42 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:32.063 81ac7834-1921-4114-a707-ea79fbace892 00:08:32.063 06:33:42 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.321 /dev/nbd0 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:32.321 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.321 Discarding device blocks: 0/4096 done 00:08:32.321 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.321 00:08:32.321 Allocating group tables: 0/1 done 00:08:32.321 Writing inode tables: 0/1 done 00:08:32.321 Creating journal (1024 blocks): done 00:08:32.321 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.321 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.321 06:33:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@41 -- # break 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:32.580 06:33:43 -- bdev/nbd_common.sh@147 -- # return 0 00:08:32.580 06:33:43 -- bdev/blockdev.sh@324 -- # killprocess 73464 00:08:32.580 06:33:43 -- 
common/autotest_common.sh@936 -- # '[' -z 73464 ']' 00:08:32.580 06:33:43 -- common/autotest_common.sh@940 -- # kill -0 73464 00:08:32.580 06:33:43 -- common/autotest_common.sh@941 -- # uname 00:08:32.580 06:33:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.580 06:33:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73464 00:08:32.580 06:33:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.580 06:33:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.580 06:33:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73464' 00:08:32.580 killing process with pid 73464 00:08:32.580 06:33:43 -- common/autotest_common.sh@955 -- # kill 73464 00:08:32.580 06:33:43 -- common/autotest_common.sh@960 -- # wait 73464 00:08:32.838 06:33:43 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:32.838 00:08:32.838 real 0m9.693s 00:08:32.838 user 0m13.824s 00:08:32.838 sys 0m3.428s 00:08:32.838 06:33:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.838 ************************************ 00:08:32.838 END TEST bdev_nbd 00:08:32.838 ************************************ 00:08:32.838 06:33:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.838 skipping fio tests on NVMe due to multi-ns failures. 00:08:32.838 06:33:43 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:32.838 06:33:43 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:08:32.838 06:33:43 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:08:32.838 06:33:43 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:32.838 06:33:43 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:32.838 06:33:43 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.839 06:33:43 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:32.839 06:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.839 06:33:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.839 ************************************ 00:08:32.839 START TEST bdev_verify 00:08:32.839 ************************************ 00:08:32.839 06:33:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.839 [2024-11-28 06:33:43.578794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.839 [2024-11-28 06:33:43.578898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73862 ] 00:08:33.097 [2024-11-28 06:33:43.711743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.097 [2024-11-28 06:33:43.750396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.097 [2024-11-28 06:33:43.750445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.663 Running I/O for 5 seconds... 
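For anyone replaying this stage outside the harness: the bdev_verify test launched above boils down to a single bdevperf invocation, with everything else being run_test/xtrace plumbing. A minimal sketch (paths and flags copied from the trace; bdev.json is assumed to already describe the same seven NVMe bdevs):

    # Verify workload: bdevperf writes a pattern, reads it back and compares.
    # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/Os; -t 5: run 5 s;
    # -m 0x3: reactors on cores 0 and 1 (the two reactors reported above);
    # -C is kept exactly as traced (the result table shows one job per core per bdev).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3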
00:08:38.924
00:08:38.924 Latency(us)
00:08:38.924 [2024-11-28T06:33:49.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x5e800
00:08:38.924 Nvme0n1p1 : 5.05 2739.64 10.70 0.00 0.00 46573.56 9981.64 59688.17
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x5e800 length 0x5e800
00:08:38.924 Nvme0n1p1 : 5.07 2347.47 9.17 0.00 0.00 54136.90 9023.80 54041.99
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x5e7ff
00:08:38.924 Nvme0n1p2 : 5.05 2738.89 10.70 0.00 0.00 46559.16 9477.51 57268.38
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:38.924 Nvme0n1p2 : 5.07 2346.82 9.17 0.00 0.00 54107.18 9578.34 54445.29
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0xa0000
00:08:38.924 Nvme1n1 : 5.05 2738.19 10.70 0.00 0.00 46528.44 10082.46 54041.99
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0xa0000 length 0xa0000
00:08:38.924 Nvme1n1 : 5.07 2346.26 9.17 0.00 0.00 54066.48 9628.75 53638.70
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x80000
00:08:38.924 Nvme2n1 : 5.05 2744.45 10.72 0.00 0.00 46408.02 2810.49 49404.06
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x80000 length 0x80000
00:08:38.924 Nvme2n1 : 5.07 2345.67 9.16 0.00 0.00 54027.11 9880.81 53638.70
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x80000
00:08:38.924 Nvme2n2 : 5.06 2743.40 10.72 0.00 0.00 46382.10 4133.81 47790.87
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x80000 length 0x80000
00:08:38.924 Nvme2n2 : 5.05 2344.35 9.16 0.00 0.00 54439.36 6175.51 62914.56
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x80000
00:08:38.924 Nvme2n3 : 5.06 2742.14 10.71 0.00 0.00 46353.75 5646.18 44766.13
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x80000 length 0x80000
00:08:38.924 Nvme2n3 : 5.06 2349.88 9.18 0.00 0.00 54214.55 5923.45 53638.70
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x0 length 0x20000
00:08:38.924 Nvme3n1 : 5.06 2740.96 10.71 0.00 0.00 46315.66 6805.66 46177.67
00:08:38.924 [2024-11-28T06:33:49.694Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:38.924 Verification LBA range: start 0x20000 length 0x20000
00:08:38.924 Nvme3n1 : 5.06 2348.11 9.17 0.00 0.00 54179.76 9074.22 52832.10
00:08:38.924 [2024-11-28T06:33:49.694Z] ===================================================================================================================
00:08:38.924 [2024-11-28T06:33:49.694Z] Total : 35616.22 139.13 0.00 0.00 50011.55 2810.49 62914.56
00:08:47.040 00:08:47.040 real 0m13.085s 00:08:47.040 user 0m25.300s 00:08:47.040 sys 0m0.317s 00:08:47.040 06:33:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.040 06:33:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.040 ************************************ 00:08:47.040 END TEST bdev_verify 00:08:47.040 ************************************ 00:08:47.040 06:33:56 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:47.040 06:33:56 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:47.040 06:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.040 06:33:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.040 ************************************ 00:08:47.040 START TEST bdev_verify_big_io 00:08:47.040 ************************************ 00:08:47.040 06:33:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:47.040 [2024-11-28 06:33:56.716150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:47.040 [2024-11-28 06:33:56.716259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73997 ] 00:08:47.040 [2024-11-28 06:33:56.853201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:47.040 [2024-11-28 06:33:56.894492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.040 [2024-11-28 06:33:56.894534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.040 Running I/O for 5 seconds...
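The bdev_verify_big_io stage just launched is the same verify workload with only the I/O size changed: -o 65536 (64 KiB) instead of 4096. That makes the table below easy to cross-check, since the MiB/s column is just IOPS times the I/O size (a sketch; the figures are taken from the Nvme0n1p1 row that follows):

    # Throughput sanity check: MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 250.11 * 65536 / (1024 * 1024) }'
    # prints 15.63 MiB/s, matching the Nvme0n1p1 (Core Mask 0x1) row below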
00:08:52.388
00:08:52.388 Latency(us)
00:08:52.388 [2024-11-28T06:34:03.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x5e80
00:08:52.388 Nvme0n1p1 : 5.39 250.11 15.63 0.00 0.00 502479.02 35490.26 751748.33
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x5e80 length 0x5e80
00:08:52.388 Nvme0n1p1 : 5.36 226.29 14.14 0.00 0.00 556374.81 50009.01 767880.27
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x5e7f
00:08:52.388 Nvme0n1p2 : 5.42 256.62 16.04 0.00 0.00 486753.77 24298.73 700126.13
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x5e7f length 0x5e7f
00:08:52.388 Nvme0n1p2 : 5.37 226.23 14.14 0.00 0.00 548253.20 50210.66 703352.52
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0xa000
00:08:52.388 Nvme1n1 : 5.42 256.52 16.03 0.00 0.00 480395.55 25609.45 651730.31
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0xa000 length 0xa000
00:08:52.388 Nvme1n1 : 5.37 226.17 14.14 0.00 0.00 540748.32 50815.61 645277.54
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x8000
00:08:52.388 Nvme2n1 : 5.42 256.43 16.03 0.00 0.00 473893.96 26416.05 600108.11
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x8000 length 0x8000
00:08:52.388 Nvme2n1 : 5.40 231.95 14.50 0.00 0.00 520511.38 32667.18 651730.31
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x8000
00:08:52.388 Nvme2n2 : 5.43 256.36 16.02 0.00 0.00 467366.81 27021.00 603334.50
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x8000 length 0x8000
00:08:52.388 Nvme2n2 : 5.42 239.21 14.95 0.00 0.00 499146.37 17946.78 571070.62
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x8000
00:08:52.388 Nvme2n3 : 5.45 262.48 16.41 0.00 0.00 450534.02 19660.80 609787.27
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x8000 length 0x8000
00:08:52.388 Nvme2n3 : 5.43 248.42 15.53 0.00 0.00 475286.47 6604.01 577523.40
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x0 length 0x2000
00:08:52.388 Nvme3n1 : 5.45 279.38 17.46 0.00 0.00 419139.60 2180.33 822728.86
00:08:52.388 [2024-11-28T06:34:03.158Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:52.388 Verification LBA range: start 0x2000 length 0x2000
00:08:52.388 Nvme3n1 : 5.46 278.34 17.40 0.00 0.00 419040.04 425.35 955010.76
00:08:52.388 [2024-11-28T06:34:03.158Z] ===================================================================================================================
00:08:52.388 [2024-11-28T06:34:03.158Z] Total : 3494.51 218.41 0.00 0.00 485558.84 425.35 955010.76
00:08:53.332 00:08:53.332 real 0m7.404s 00:08:53.332 user 0m14.067s 00:08:53.332 sys 0m0.248s 00:08:53.332 ************************************ 00:08:53.332 END TEST bdev_verify_big_io 00:08:53.332 ************************************ 00:08:53.332 06:34:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.332 06:34:04 -- common/autotest_common.sh@10 -- # set +x 00:08:53.593 06:34:04 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.593 06:34:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:53.593 06:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.593 06:34:04 -- common/autotest_common.sh@10 -- # set +x 00:08:53.593 ************************************ 00:08:53.593 START TEST bdev_write_zeroes 00:08:53.593 ************************************ 00:08:53.593 06:34:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.593 [2024-11-28 06:34:04.172128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:53.593 [2024-11-28 06:34:04.172242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:08:53.593 [2024-11-28 06:34:04.302374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.593 [2024-11-28 06:34:04.333491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.162 Running I/O for 1 seconds...
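bdev_write_zeroes swaps the workload for -w write_zeroes over a one-second run on a single reactor (the EAL trace above shows -c 0x1 and one available core). Write-zeroes issues no read-back compare, so the table below carries only throughput and latency figures. A hypothetical spot check, if a bdev were exported over NBD as in the earlier bdev_nbd test (device name assumed; this is not part of the harness):

    # GNU cmp exits 0 when the ranges match, so the message prints only if
    # the first MiB of the exported device really reads back as zeroes.
    cmp -n 1M /dev/nbd0 /dev/zero && echo 'first 1 MiB is zeroed'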
00:08:55.095 00:08:55.095 Latency(us) 00:08:55.095 [2024-11-28T06:34:05.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme0n1p1 : 1.02 5977.35 23.35 0.00 0.00 21346.45 7057.72 158899.59 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme0n1p2 : 1.02 5934.86 23.18 0.00 0.00 21453.13 9830.40 150027.03 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme1n1 : 1.02 6147.68 24.01 0.00 0.00 20655.55 11342.77 150833.62 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme2n1 : 1.02 6012.39 23.49 0.00 0.00 21038.39 11191.53 152446.82 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme2n2 : 1.02 6065.91 23.69 0.00 0.00 20795.79 10384.94 153253.42 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme2n3 : 1.03 6117.17 23.90 0.00 0.00 20675.69 9477.51 153253.42 00:08:55.095 [2024-11-28T06:34:05.865Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:55.095 Nvme3n1 : 1.03 6110.28 23.87 0.00 0.00 20669.51 9628.75 153253.42 00:08:55.095 [2024-11-28T06:34:05.865Z] =================================================================================================================== 00:08:55.095 [2024-11-28T06:34:05.865Z] Total : 42365.65 165.49 0.00 0.00 20943.30 7057.72 158899.59 00:08:55.353 00:08:55.353 real 0m1.820s 00:08:55.353 user 0m1.544s 00:08:55.353 sys 0m0.163s 00:08:55.353 06:34:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.353 ************************************ 00:08:55.353 END TEST bdev_write_zeroes 00:08:55.353 ************************************ 00:08:55.353 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:55.353 06:34:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.353 06:34:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:55.353 06:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.353 06:34:05 -- common/autotest_common.sh@10 -- # set +x 00:08:55.353 ************************************ 00:08:55.353 START TEST bdev_json_nonenclosed 00:08:55.353 ************************************ 00:08:55.353 06:34:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.353 [2024-11-28 06:34:06.041462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:55.353 [2024-11-28 06:34:06.041573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74137 ] 00:08:55.610 [2024-11-28 06:34:06.174941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.610 [2024-11-28 06:34:06.210919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.610 [2024-11-28 06:34:06.211065] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:55.611 [2024-11-28 06:34:06.211089] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.611 00:08:55.611 real 0m0.312s 00:08:55.611 user 0m0.126s 00:08:55.611 sys 0m0.083s 00:08:55.611 06:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.611 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:55.611 ************************************ 00:08:55.611 END TEST bdev_json_nonenclosed 00:08:55.611 ************************************ 00:08:55.611 06:34:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.611 06:34:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:55.611 06:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.611 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:55.611 ************************************ 00:08:55.611 START TEST bdev_json_nonarray 00:08:55.611 ************************************ 00:08:55.611 06:34:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.869 [2024-11-28 06:34:06.404551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:55.869 [2024-11-28 06:34:06.404669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74163 ] 00:08:55.869 [2024-11-28 06:34:06.532586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.869 [2024-11-28 06:34:06.563524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.869 [2024-11-28 06:34:06.563682] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:55.869 [2024-11-28 06:34:06.563714] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.869 00:08:55.869 real 0m0.291s 00:08:55.869 user 0m0.111s 00:08:55.869 sys 0m0.077s 00:08:55.869 06:34:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.131 ************************************ 00:08:56.131 END TEST bdev_json_nonarray 00:08:56.131 ************************************ 00:08:56.131 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:56.131 06:34:06 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:08:56.131 06:34:06 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:08:56.131 06:34:06 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:56.131 06:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.131 06:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.131 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:56.131 ************************************ 00:08:56.131 START TEST bdev_gpt_uuid 00:08:56.131 ************************************ 00:08:56.131 06:34:06 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:08:56.131 06:34:06 -- bdev/blockdev.sh@612 -- # local bdev 00:08:56.131 06:34:06 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:08:56.131 06:34:06 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=74183 00:08:56.131 06:34:06 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:56.131 06:34:06 -- bdev/blockdev.sh@47 -- # waitforlisten 74183 00:08:56.131 06:34:06 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:56.131 06:34:06 -- common/autotest_common.sh@829 -- # '[' -z 74183 ']' 00:08:56.131 06:34:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.131 06:34:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.131 06:34:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.131 06:34:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.131 06:34:06 -- common/autotest_common.sh@10 -- # set +x 00:08:56.131 [2024-11-28 06:34:06.761825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:56.131 [2024-11-28 06:34:06.761942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74183 ] 00:08:56.131 [2024-11-28 06:34:06.896450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.393 [2024-11-28 06:34:06.930800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.393 [2024-11-28 06:34:06.930990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.965 06:34:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.965 06:34:07 -- common/autotest_common.sh@862 -- # return 0 00:08:56.965 06:34:07 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.965 06:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.965 06:34:07 -- common/autotest_common.sh@10 -- # set +x 00:08:57.225 Some configs were skipped because the RPC state that can call them passed over. 
00:08:57.225 06:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.225 06:34:07 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:08:57.225 06:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.225 06:34:07 -- common/autotest_common.sh@10 -- # set +x 00:08:57.225 06:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.225 06:34:07 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:57.225 06:34:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.225 06:34:07 -- common/autotest_common.sh@10 -- # set +x 00:08:57.225 06:34:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.225 06:34:07 -- bdev/blockdev.sh@619 -- # bdev='[ 00:08:57.225 { 00:08:57.225 "name": "Nvme0n1p1", 00:08:57.225 "aliases": [ 00:08:57.225 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:57.225 ], 00:08:57.225 "product_name": "GPT Disk", 00:08:57.225 "block_size": 4096, 00:08:57.225 "num_blocks": 774144, 00:08:57.225 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:57.225 "md_size": 64, 00:08:57.225 "md_interleave": false, 00:08:57.225 "dif_type": 0, 00:08:57.225 "assigned_rate_limits": { 00:08:57.225 "rw_ios_per_sec": 0, 00:08:57.225 "rw_mbytes_per_sec": 0, 00:08:57.225 "r_mbytes_per_sec": 0, 00:08:57.225 "w_mbytes_per_sec": 0 00:08:57.225 }, 00:08:57.225 "claimed": false, 00:08:57.225 "zoned": false, 00:08:57.225 "supported_io_types": { 00:08:57.225 "read": true, 00:08:57.225 "write": true, 00:08:57.225 "unmap": true, 00:08:57.225 "write_zeroes": true, 00:08:57.225 "flush": true, 00:08:57.225 "reset": true, 00:08:57.225 "compare": true, 00:08:57.225 "compare_and_write": false, 00:08:57.225 "abort": true, 00:08:57.225 "nvme_admin": false, 00:08:57.225 "nvme_io": false 00:08:57.225 }, 00:08:57.225 "driver_specific": { 00:08:57.225 "gpt": { 00:08:57.225 "base_bdev": "Nvme0n1", 00:08:57.225 "offset_blocks": 256, 00:08:57.225 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:57.225 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:57.225 "partition_name": "SPDK_TEST_first" 00:08:57.225 } 00:08:57.225 } 00:08:57.225 } 00:08:57.225 ]' 00:08:57.225 06:34:07 -- bdev/blockdev.sh@620 -- # jq -r length 00:08:57.225 06:34:07 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:08:57.225 06:34:07 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:08:57.486 06:34:07 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:57.486 06:34:08 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:57.486 06:34:08 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:57.486 06:34:08 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:57.486 06:34:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.486 06:34:08 -- common/autotest_common.sh@10 -- # set +x 00:08:57.486 06:34:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.486 06:34:08 -- bdev/blockdev.sh@624 -- # bdev='[ 00:08:57.486 { 00:08:57.486 "name": "Nvme0n1p2", 00:08:57.486 "aliases": [ 00:08:57.486 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:57.486 ], 00:08:57.486 "product_name": "GPT Disk", 00:08:57.486 "block_size": 4096, 00:08:57.486 "num_blocks": 774143, 00:08:57.486 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:08:57.486 "md_size": 64, 00:08:57.486 "md_interleave": false, 00:08:57.486 "dif_type": 0, 00:08:57.486 "assigned_rate_limits": { 00:08:57.486 "rw_ios_per_sec": 0, 00:08:57.486 "rw_mbytes_per_sec": 0, 00:08:57.486 "r_mbytes_per_sec": 0, 00:08:57.486 "w_mbytes_per_sec": 0 00:08:57.486 }, 00:08:57.486 "claimed": false, 00:08:57.486 "zoned": false, 00:08:57.486 "supported_io_types": { 00:08:57.486 "read": true, 00:08:57.486 "write": true, 00:08:57.486 "unmap": true, 00:08:57.486 "write_zeroes": true, 00:08:57.486 "flush": true, 00:08:57.486 "reset": true, 00:08:57.486 "compare": true, 00:08:57.486 "compare_and_write": false, 00:08:57.486 "abort": true, 00:08:57.487 "nvme_admin": false, 00:08:57.487 "nvme_io": false 00:08:57.487 }, 00:08:57.487 "driver_specific": { 00:08:57.487 "gpt": { 00:08:57.487 "base_bdev": "Nvme0n1", 00:08:57.487 "offset_blocks": 774400, 00:08:57.487 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:57.487 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:57.487 "partition_name": "SPDK_TEST_second" 00:08:57.487 } 00:08:57.487 } 00:08:57.487 } 00:08:57.487 ]' 00:08:57.487 06:34:08 -- bdev/blockdev.sh@625 -- # jq -r length 00:08:57.487 06:34:08 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:08:57.487 06:34:08 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:08:57.487 06:34:08 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:57.487 06:34:08 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:57.487 06:34:08 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:57.487 06:34:08 -- bdev/blockdev.sh@629 -- # killprocess 74183 00:08:57.487 06:34:08 -- common/autotest_common.sh@936 -- # '[' -z 74183 ']' 00:08:57.487 06:34:08 -- common/autotest_common.sh@940 -- # kill -0 74183 00:08:57.487 06:34:08 -- common/autotest_common.sh@941 -- # uname 00:08:57.487 06:34:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:57.487 06:34:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74183 00:08:57.487 06:34:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:57.487 killing process with pid 74183 00:08:57.487 06:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:57.487 06:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74183' 00:08:57.487 06:34:08 -- common/autotest_common.sh@955 -- # kill 74183 00:08:57.487 06:34:08 -- common/autotest_common.sh@960 -- # wait 74183 00:08:57.747 00:08:57.747 real 0m1.761s 00:08:57.747 user 0m1.957s 00:08:57.747 sys 0m0.308s 00:08:57.747 06:34:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.747 ************************************ 00:08:57.747 END TEST bdev_gpt_uuid 00:08:57.747 ************************************ 00:08:57.747 06:34:08 -- common/autotest_common.sh@10 -- # set +x 00:08:57.747 06:34:08 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:08:57.747 06:34:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:57.747 06:34:08 -- bdev/blockdev.sh@809 -- # cleanup 00:08:57.747 06:34:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:57.747 06:34:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:57.747 06:34:08 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:08:57.747 06:34:08 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:08:57.747 06:34:08 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:08:57.747 06:34:08 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:58.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.320 Waiting for block devices as requested 00:08:58.320 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.581 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.581 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.581 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.876 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:03.876 06:34:14 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:03.876 06:34:14 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:04.137 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:04.137 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:04.137 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:04.137 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:04.137 06:34:14 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:04.137 00:09:04.137 real 0m54.492s 00:09:04.137 user 1m13.527s 00:09:04.137 sys 0m7.677s 00:09:04.137 06:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.137 ************************************ 00:09:04.137 END TEST blockdev_nvme_gpt 00:09:04.137 ************************************ 00:09:04.137 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 06:34:14 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:04.137 06:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.137 06:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.137 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 ************************************ 00:09:04.137 START TEST nvme 00:09:04.137 ************************************ 00:09:04.137 06:34:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:04.137 * Looking for test storage... 
00:09:04.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:04.137 06:34:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.137 06:34:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.138 06:34:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.399 06:34:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.399 06:34:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.399 06:34:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.399 06:34:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.399 06:34:14 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.399 06:34:14 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.399 06:34:14 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.399 06:34:14 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.399 06:34:14 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.399 06:34:14 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.399 06:34:14 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.399 06:34:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.399 06:34:14 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.399 06:34:14 -- scripts/common.sh@344 -- # : 1 00:09:04.399 06:34:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.399 06:34:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.399 06:34:14 -- scripts/common.sh@364 -- # decimal 1 00:09:04.399 06:34:14 -- scripts/common.sh@352 -- # local d=1 00:09:04.399 06:34:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.399 06:34:14 -- scripts/common.sh@354 -- # echo 1 00:09:04.399 06:34:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.399 06:34:14 -- scripts/common.sh@365 -- # decimal 2 00:09:04.399 06:34:14 -- scripts/common.sh@352 -- # local d=2 00:09:04.399 06:34:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.399 06:34:14 -- scripts/common.sh@354 -- # echo 2 00:09:04.399 06:34:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.399 06:34:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.399 06:34:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.399 06:34:14 -- scripts/common.sh@367 -- # return 0 00:09:04.399 06:34:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.399 06:34:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.399 --rc genhtml_branch_coverage=1 00:09:04.399 --rc genhtml_function_coverage=1 00:09:04.399 --rc genhtml_legend=1 00:09:04.399 --rc geninfo_all_blocks=1 00:09:04.399 --rc geninfo_unexecuted_blocks=1 00:09:04.399 00:09:04.399 ' 00:09:04.399 06:34:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.399 --rc genhtml_branch_coverage=1 00:09:04.399 --rc genhtml_function_coverage=1 00:09:04.399 --rc genhtml_legend=1 00:09:04.399 --rc geninfo_all_blocks=1 00:09:04.399 --rc geninfo_unexecuted_blocks=1 00:09:04.399 00:09:04.399 ' 00:09:04.399 06:34:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.399 --rc genhtml_branch_coverage=1 00:09:04.399 --rc genhtml_function_coverage=1 00:09:04.399 --rc genhtml_legend=1 00:09:04.399 --rc geninfo_all_blocks=1 00:09:04.399 --rc geninfo_unexecuted_blocks=1 00:09:04.399 00:09:04.399 ' 00:09:04.399 06:34:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.399 --rc genhtml_branch_coverage=1 00:09:04.399 --rc genhtml_function_coverage=1 00:09:04.399 --rc genhtml_legend=1 00:09:04.399 --rc geninfo_all_blocks=1 00:09:04.399 --rc geninfo_unexecuted_blocks=1 00:09:04.399 00:09:04.399 ' 00:09:04.399 06:34:14 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:05.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:05.345 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.345 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.345 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.345 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.345 06:34:16 -- nvme/nvme.sh@79 -- # uname 00:09:05.345 06:34:16 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:05.345 06:34:16 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:05.345 06:34:16 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:05.345 06:34:16 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:05.345 06:34:16 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:05.345 06:34:16 -- common/autotest_common.sh@1055 -- # echo 0 00:09:05.345 06:34:16 -- common/autotest_common.sh@1057 -- # stubpid=74833 00:09:05.345 06:34:16 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:05.345 Waiting for stub to ready for secondary processes... 00:09:05.345 06:34:16 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:05.345 06:34:16 -- common/autotest_common.sh@1061 -- # [[ -e /proc/74833 ]] 00:09:05.345 06:34:16 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:05.345 06:34:16 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:05.646 [2024-11-28 06:34:16.138127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:05.646 [2024-11-28 06:34:16.138281] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.587 [2024-11-28 06:34:17.103054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.587 06:34:17 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:06.587 06:34:17 -- common/autotest_common.sh@1061 -- # [[ -e /proc/74833 ]] 00:09:06.587 06:34:17 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:06.587 [2024-11-28 06:34:17.132635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.587 [2024-11-28 06:34:17.133016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.587 [2024-11-28 06:34:17.133094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.587 [2024-11-28 06:34:17.149040] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:06.587 [2024-11-28 06:34:17.161117] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:06.587 [2024-11-28 06:34:17.161395] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:06.587 [2024-11-28 06:34:17.166360] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:06.587 [2024-11-28 06:34:17.166752] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:06.587 [2024-11-28 06:34:17.167018] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:06.587 [2024-11-28 06:34:17.172402] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:06.587 [2024-11-28 06:34:17.172817] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:06.587 [2024-11-28 06:34:17.173103] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:06.587 [2024-11-28 06:34:17.177215] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:06.587 [2024-11-28 06:34:17.177417] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:06.587 [2024-11-28 06:34:17.177537] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:06.587 [2024-11-28 06:34:17.177638] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:06.587 [2024-11-28 06:34:17.177760] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:07.530 done. 00:09:07.530 06:34:18 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:07.530 06:34:18 -- common/autotest_common.sh@1064 -- # echo done. 
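The handshake traced above is a simple poll loop: start the stub as the DPDK primary process, then block until it creates /var/run/spdk_stub0 or its process disappears. A minimal sketch under those assumptions; $rootdir standing for the spdk checkout and the stubpid variable name are illustrative:

    # start the stub (primary process) and wait for its ready marker
    "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1s    # the stub touches /var/run/spdk_stub0 once EAL init completes
    done
    echo done.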
00:09:07.530 06:34:18 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:07.530 06:34:18 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:07.530 06:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.530 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:09:07.530 ************************************ 00:09:07.530 START TEST nvme_reset 00:09:07.530 ************************************ 00:09:07.530 06:34:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:07.790 Initializing NVMe Controllers 00:09:07.790 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:07.790 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:07.790 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:07.790 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:07.790 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:07.790 00:09:07.790 real 0m0.174s 00:09:07.790 user 0m0.052s 00:09:07.790 sys 0m0.086s 00:09:07.790 06:34:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.790 ************************************ 00:09:07.790 END TEST nvme_reset 00:09:07.790 ************************************ 00:09:07.790 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:09:07.790 06:34:18 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:07.790 06:34:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.790 06:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.790 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:09:07.790 ************************************ 00:09:07.790 START TEST nvme_identify 00:09:07.790 ************************************ 00:09:07.790 06:34:18 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:07.790 06:34:18 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:07.790 06:34:18 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:07.790 06:34:18 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:07.790 06:34:18 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:07.790 06:34:18 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:07.790 06:34:18 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:07.790 06:34:18 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:07.790 06:34:18 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:07.790 06:34:18 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:07.790 06:34:18 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:07.790 06:34:18 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:07.790 06:34:18 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:08.054 [2024-11-28 06:34:18.592336] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 74875 terminated unexpected 00:09:08.054 ===================================================== 00:09:08.054 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:08.054 ===================================================== 00:09:08.054 Controller Capabilities/Features 00:09:08.054 ================================ 00:09:08.054 Vendor ID: 1b36 00:09:08.054 Subsystem Vendor ID: 1af4 00:09:08.054 Serial Number: 12341 00:09:08.054 Model Number: QEMU NVMe Ctrl 00:09:08.054 Firmware Version: 8.0.0 00:09:08.054 Recommended Arb 
Burst: 6 00:09:08.054 IEEE OUI Identifier: 00 54 52 00:09:08.054 Multi-path I/O 00:09:08.054 May have multiple subsystem ports: No 00:09:08.054 May have multiple controllers: No 00:09:08.054 Associated with SR-IOV VF: No 00:09:08.054 Max Data Transfer Size: 524288 00:09:08.054 Max Number of Namespaces: 256 00:09:08.054 Max Number of I/O Queues: 64 00:09:08.054 NVMe Specification Version (VS): 1.4 00:09:08.054 NVMe Specification Version (Identify): 1.4 00:09:08.054 Maximum Queue Entries: 2048 00:09:08.054 Contiguous Queues Required: Yes 00:09:08.054 Arbitration Mechanisms Supported 00:09:08.054 Weighted Round Robin: Not Supported 00:09:08.054 Vendor Specific: Not Supported 00:09:08.054 Reset Timeout: 7500 ms 00:09:08.054 Doorbell Stride: 4 bytes 00:09:08.054 NVM Subsystem Reset: Not Supported 00:09:08.054 Command Sets Supported 00:09:08.054 NVM Command Set: Supported 00:09:08.054 Boot Partition: Not Supported 00:09:08.054 Memory Page Size Minimum: 4096 bytes 00:09:08.054 Memory Page Size Maximum: 65536 bytes 00:09:08.054 Persistent Memory Region: Not Supported 00:09:08.054 Optional Asynchronous Events Supported 00:09:08.054 Namespace Attribute Notices: Supported 00:09:08.054 Firmware Activation Notices: Not Supported 00:09:08.054 ANA Change Notices: Not Supported 00:09:08.054 PLE Aggregate Log Change Notices: Not Supported 00:09:08.054 LBA Status Info Alert Notices: Not Supported 00:09:08.054 EGE Aggregate Log Change Notices: Not Supported 00:09:08.054 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.054 Zone Descriptor Change Notices: Not Supported 00:09:08.054 Discovery Log Change Notices: Not Supported 00:09:08.054 Controller Attributes 00:09:08.054 128-bit Host Identifier: Not Supported 00:09:08.054 Non-Operational Permissive Mode: Not Supported 00:09:08.054 NVM Sets: Not Supported 00:09:08.054 Read Recovery Levels: Not Supported 00:09:08.054 Endurance Groups: Not Supported 00:09:08.054 Predictable Latency Mode: Not Supported 00:09:08.054 Traffic Based Keep ALive: Not Supported 00:09:08.054 Namespace Granularity: Not Supported 00:09:08.054 SQ Associations: Not Supported 00:09:08.054 UUID List: Not Supported 00:09:08.054 Multi-Domain Subsystem: Not Supported 00:09:08.054 Fixed Capacity Management: Not Supported 00:09:08.054 Variable Capacity Management: Not Supported 00:09:08.054 Delete Endurance Group: Not Supported 00:09:08.054 Delete NVM Set: Not Supported 00:09:08.054 Extended LBA Formats Supported: Supported 00:09:08.054 Flexible Data Placement Supported: Not Supported 00:09:08.054 00:09:08.054 Controller Memory Buffer Support 00:09:08.054 ================================ 00:09:08.054 Supported: No 00:09:08.054 00:09:08.054 Persistent Memory Region Support 00:09:08.054 ================================ 00:09:08.054 Supported: No 00:09:08.054 00:09:08.054 Admin Command Set Attributes 00:09:08.054 ============================ 00:09:08.054 Security Send/Receive: Not Supported 00:09:08.054 Format NVM: Supported 00:09:08.054 Firmware Activate/Download: Not Supported 00:09:08.054 Namespace Management: Supported 00:09:08.054 Device Self-Test: Not Supported 00:09:08.054 Directives: Supported 00:09:08.054 NVMe-MI: Not Supported 00:09:08.054 Virtualization Management: Not Supported 00:09:08.054 Doorbell Buffer Config: Supported 00:09:08.054 Get LBA Status Capability: Not Supported 00:09:08.054 Command & Feature Lockdown Capability: Not Supported 00:09:08.054 Abort Command Limit: 4 00:09:08.054 Async Event Request Limit: 4 00:09:08.054 Number of Firmware Slots: N/A 00:09:08.054 
Firmware Slot 1 Read-Only: N/A 00:09:08.054 Firmware Activation Without Reset: N/A 00:09:08.054 Multiple Update Detection Support: N/A 00:09:08.054 Firmware Update Granularity: No Information Provided 00:09:08.054 Per-Namespace SMART Log: Yes 00:09:08.054 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.054 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:08.054 Command Effects Log Page: Supported 00:09:08.054 Get Log Page Extended Data: Supported 00:09:08.054 Telemetry Log Pages: Not Supported 00:09:08.054 Persistent Event Log Pages: Not Supported 00:09:08.054 Supported Log Pages Log Page: May Support 00:09:08.054 Commands Supported & Effects Log Page: Not Supported 00:09:08.054 Feature Identifiers & Effects Log Page:May Support 00:09:08.054 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.054 Data Area 4 for Telemetry Log: Not Supported 00:09:08.054 Error Log Page Entries Supported: 1 00:09:08.054 Keep Alive: Not Supported 00:09:08.054 00:09:08.054 NVM Command Set Attributes 00:09:08.054 ========================== 00:09:08.054 Submission Queue Entry Size 00:09:08.054 Max: 64 00:09:08.054 Min: 64 00:09:08.054 Completion Queue Entry Size 00:09:08.054 Max: 16 00:09:08.054 Min: 16 00:09:08.054 Number of Namespaces: 256 00:09:08.054 Compare Command: Supported 00:09:08.054 Write Uncorrectable Command: Not Supported 00:09:08.054 Dataset Management Command: Supported 00:09:08.054 Write Zeroes Command: Supported 00:09:08.054 Set Features Save Field: Supported 00:09:08.054 Reservations: Not Supported 00:09:08.054 Timestamp: Supported 00:09:08.054 Copy: Supported 00:09:08.054 Volatile Write Cache: Present 00:09:08.054 Atomic Write Unit (Normal): 1 00:09:08.054 Atomic Write Unit (PFail): 1 00:09:08.054 Atomic Compare & Write Unit: 1 00:09:08.054 Fused Compare & Write: Not Supported 00:09:08.054 Scatter-Gather List 00:09:08.054 SGL Command Set: Supported 00:09:08.054 SGL Keyed: Not Supported 00:09:08.054 SGL Bit Bucket Descriptor: Not Supported 00:09:08.054 SGL Metadata Pointer: Not Supported 00:09:08.054 Oversized SGL: Not Supported 00:09:08.054 SGL Metadata Address: Not Supported 00:09:08.054 SGL Offset: Not Supported 00:09:08.054 Transport SGL Data Block: Not Supported 00:09:08.054 Replay Protected Memory Block: Not Supported 00:09:08.054 00:09:08.054 Firmware Slot Information 00:09:08.054 ========================= 00:09:08.054 Active slot: 1 00:09:08.054 Slot 1 Firmware Revision: 1.0 00:09:08.054 00:09:08.054 00:09:08.054 Commands Supported and Effects 00:09:08.054 ============================== 00:09:08.054 Admin Commands 00:09:08.054 -------------- 00:09:08.054 Delete I/O Submission Queue (00h): Supported 00:09:08.054 Create I/O Submission Queue (01h): Supported 00:09:08.054 Get Log Page (02h): Supported 00:09:08.054 Delete I/O Completion Queue (04h): Supported 00:09:08.054 Create I/O Completion Queue (05h): Supported 00:09:08.054 Identify (06h): Supported 00:09:08.054 Abort (08h): Supported 00:09:08.054 Set Features (09h): Supported 00:09:08.054 Get Features (0Ah): Supported 00:09:08.054 Asynchronous Event Request (0Ch): Supported 00:09:08.054 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.054 Directive Send (19h): Supported 00:09:08.054 Directive Receive (1Ah): Supported 00:09:08.054 Virtualization Management (1Ch): Supported 00:09:08.054 Doorbell Buffer Config (7Ch): Supported 00:09:08.055 Format NVM (80h): Supported LBA-Change 00:09:08.055 I/O Commands 00:09:08.055 ------------ 00:09:08.055 Flush (00h): Supported LBA-Change 00:09:08.055 Write (01h): 
Supported LBA-Change 00:09:08.055 Read (02h): Supported 00:09:08.055 Compare (05h): Supported 00:09:08.055 Write Zeroes (08h): Supported LBA-Change 00:09:08.055 Dataset Management (09h): Supported LBA-Change 00:09:08.055 Unknown (0Ch): Supported 00:09:08.055 Unknown (12h): Supported 00:09:08.055 Copy (19h): Supported LBA-Change 00:09:08.055 Unknown (1Dh): Supported LBA-Change 00:09:08.055 00:09:08.055 Error Log 00:09:08.055 ========= 00:09:08.055 00:09:08.055 Arbitration 00:09:08.055 =========== 00:09:08.055 Arbitration Burst: no limit 00:09:08.055 00:09:08.055 Power Management 00:09:08.055 ================ 00:09:08.055 Number of Power States: 1 00:09:08.055 Current Power State: Power State #0 00:09:08.055 Power State #0: 00:09:08.055 Max Power: 25.00 W 00:09:08.055 Non-Operational State: Operational 00:09:08.055 Entry Latency: 16 microseconds 00:09:08.055 Exit Latency: 4 microseconds 00:09:08.055 Relative Read Throughput: 0 00:09:08.055 Relative Read Latency: 0 00:09:08.055 Relative Write Throughput: 0 00:09:08.055 Relative Write Latency: 0 00:09:08.055 [2024-11-28 06:34:18.594334] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 74875 terminated unexpected 00:09:08.055 Idle Power: Not Reported 00:09:08.055 Active Power: Not Reported 00:09:08.055 Non-Operational Permissive Mode: Not Supported 00:09:08.055 00:09:08.055 Health Information 00:09:08.055 ================== 00:09:08.055 Critical Warnings: 00:09:08.055 Available Spare Space: OK 00:09:08.055 Temperature: OK 00:09:08.055 Device Reliability: OK 00:09:08.055 Read Only: No 00:09:08.055 Volatile Memory Backup: OK 00:09:08.055 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.055 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.055 Available Spare: 0% 00:09:08.055 Available Spare Threshold: 0% 00:09:08.055 Life Percentage Used: 0% 00:09:08.055 Data Units Read: 1164 00:09:08.055 Data Units Written: 539 00:09:08.055 Host Read Commands: 56769 00:09:08.055 Host Write Commands: 27878 00:09:08.055 Controller Busy Time: 0 minutes 00:09:08.055 Power Cycles: 0 00:09:08.055 Power On Hours: 0 hours 00:09:08.055 Unsafe Shutdowns: 0 00:09:08.055 Unrecoverable Media Errors: 0 00:09:08.055 Lifetime Error Log Entries: 0 00:09:08.055 Warning Temperature Time: 0 minutes 00:09:08.055 Critical Temperature Time: 0 minutes 00:09:08.055 00:09:08.055 Number of Queues 00:09:08.055 ================ 00:09:08.055 Number of I/O Submission Queues: 64 00:09:08.055 Number of I/O Completion Queues: 64 00:09:08.055 00:09:08.055 ZNS Specific Controller Data 00:09:08.055 ============================ 00:09:08.055 Zone Append Size Limit: 0 00:09:08.055 00:09:08.055 00:09:08.055 Active Namespaces 00:09:08.055 ================= 00:09:08.055 Namespace ID:1 00:09:08.055 Error Recovery Timeout: Unlimited 00:09:08.055 Command Set Identifier: NVM (00h) 00:09:08.055 Deallocate: Supported 00:09:08.055 Deallocated/Unwritten Error: Supported 00:09:08.055 Deallocated Read Value: All 0x00 00:09:08.055 Deallocate in Write Zeroes: Not Supported 00:09:08.055 Deallocated Guard Field: 0xFFFF 00:09:08.055 Flush: Supported 00:09:08.055 Reservation: Not Supported 00:09:08.055 Namespace Sharing Capabilities: Private 00:09:08.055 Size (in LBAs): 1310720 (5GiB) 00:09:08.055 Capacity (in LBAs): 1310720 (5GiB) 00:09:08.055 Utilization (in LBAs): 1310720 (5GiB) 00:09:08.055 Thin Provisioning: Not Supported 00:09:08.055 Per-NS Atomic Units: No 00:09:08.055 Maximum Single Source Range Length: 128 00:09:08.055 Maximum Copy Length: 128 00:09:08.055 Maximum
Source Range Count: 128 00:09:08.055 NGUID/EUI64 Never Reused: No 00:09:08.055 Namespace Write Protected: No 00:09:08.055 Number of LBA Formats: 8 00:09:08.055 Current LBA Format: LBA Format #04 00:09:08.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.055 00:09:08.055 ===================================================== 00:09:08.055 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:08.055 ===================================================== 00:09:08.055 Controller Capabilities/Features 00:09:08.055 ================================ 00:09:08.055 Vendor ID: 1b36 00:09:08.055 Subsystem Vendor ID: 1af4 00:09:08.055 Serial Number: 12343 00:09:08.055 Model Number: QEMU NVMe Ctrl 00:09:08.055 Firmware Version: 8.0.0 00:09:08.055 Recommended Arb Burst: 6 00:09:08.055 IEEE OUI Identifier: 00 54 52 00:09:08.055 Multi-path I/O 00:09:08.055 May have multiple subsystem ports: No 00:09:08.055 May have multiple controllers: Yes 00:09:08.055 Associated with SR-IOV VF: No 00:09:08.055 Max Data Transfer Size: 524288 00:09:08.055 Max Number of Namespaces: 256 00:09:08.055 Max Number of I/O Queues: 64 00:09:08.055 NVMe Specification Version (VS): 1.4 00:09:08.055 NVMe Specification Version (Identify): 1.4 00:09:08.055 Maximum Queue Entries: 2048 00:09:08.055 Contiguous Queues Required: Yes 00:09:08.055 Arbitration Mechanisms Supported 00:09:08.055 Weighted Round Robin: Not Supported 00:09:08.055 Vendor Specific: Not Supported 00:09:08.055 Reset Timeout: 7500 ms 00:09:08.055 Doorbell Stride: 4 bytes 00:09:08.055 NVM Subsystem Reset: Not Supported 00:09:08.055 Command Sets Supported 00:09:08.055 NVM Command Set: Supported 00:09:08.055 Boot Partition: Not Supported 00:09:08.055 Memory Page Size Minimum: 4096 bytes 00:09:08.055 Memory Page Size Maximum: 65536 bytes 00:09:08.055 Persistent Memory Region: Not Supported 00:09:08.055 Optional Asynchronous Events Supported 00:09:08.055 Namespace Attribute Notices: Supported 00:09:08.055 Firmware Activation Notices: Not Supported 00:09:08.055 ANA Change Notices: Not Supported 00:09:08.055 PLE Aggregate Log Change Notices: Not Supported 00:09:08.055 LBA Status Info Alert Notices: Not Supported 00:09:08.055 EGE Aggregate Log Change Notices: Not Supported 00:09:08.055 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.055 Zone Descriptor Change Notices: Not Supported 00:09:08.055 Discovery Log Change Notices: Not Supported 00:09:08.055 Controller Attributes 00:09:08.055 128-bit Host Identifier: Not Supported 00:09:08.055 Non-Operational Permissive Mode: Not Supported 00:09:08.055 NVM Sets: Not Supported 00:09:08.055 Read Recovery Levels: Not Supported 00:09:08.055 Endurance Groups: Supported 00:09:08.055 Predictable Latency Mode: Not Supported 00:09:08.055 Traffic Based Keep ALive: Not Supported 00:09:08.055 Namespace Granularity: Not Supported 00:09:08.055 SQ Associations: Not Supported 00:09:08.055 UUID List: Not Supported 00:09:08.055 Multi-Domain Subsystem: Not Supported 00:09:08.055 Fixed Capacity Management: Not Supported 00:09:08.055 Variable Capacity Management: Not Supported 00:09:08.055 Delete Endurance Group: 
Not Supported 00:09:08.055 Delete NVM Set: Not Supported 00:09:08.055 Extended LBA Formats Supported: Supported 00:09:08.055 Flexible Data Placement Supported: Supported 00:09:08.055 00:09:08.055 Controller Memory Buffer Support 00:09:08.055 ================================ 00:09:08.055 Supported: No 00:09:08.055 00:09:08.055 Persistent Memory Region Support 00:09:08.055 ================================ 00:09:08.055 Supported: No 00:09:08.055 00:09:08.055 Admin Command Set Attributes 00:09:08.055 ============================ 00:09:08.055 Security Send/Receive: Not Supported 00:09:08.055 Format NVM: Supported 00:09:08.055 Firmware Activate/Download: Not Supported 00:09:08.055 Namespace Management: Supported 00:09:08.055 Device Self-Test: Not Supported 00:09:08.055 Directives: Supported 00:09:08.055 NVMe-MI: Not Supported 00:09:08.055 Virtualization Management: Not Supported 00:09:08.055 Doorbell Buffer Config: Supported 00:09:08.055 Get LBA Status Capability: Not Supported 00:09:08.055 Command & Feature Lockdown Capability: Not Supported 00:09:08.055 Abort Command Limit: 4 00:09:08.055 Async Event Request Limit: 4 00:09:08.055 Number of Firmware Slots: N/A 00:09:08.055 Firmware Slot 1 Read-Only: N/A 00:09:08.055 Firmware Activation Without Reset: N/A 00:09:08.055 Multiple Update Detection Support: N/A 00:09:08.055 Firmware Update Granularity: No Information Provided 00:09:08.055 Per-Namespace SMART Log: Yes 00:09:08.055 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.055 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:08.055 Command Effects Log Page: Supported 00:09:08.055 Get Log Page Extended Data: Supported 00:09:08.055 Telemetry Log Pages: Not Supported 00:09:08.055 Persistent Event Log Pages: Not Supported 00:09:08.055 Supported Log Pages Log Page: May Support 00:09:08.055 Commands Supported & Effects Log Page: Not Supported 00:09:08.055 Feature Identifiers & Effects Log Page:May Support 00:09:08.055 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.055 Data Area 4 for Telemetry Log: Not Supported 00:09:08.055 Error Log Page Entries Supported: 1 00:09:08.055 Keep Alive: Not Supported 00:09:08.055 00:09:08.055 NVM Command Set Attributes 00:09:08.055 ========================== 00:09:08.055 Submission Queue Entry Size 00:09:08.055 Max: 64 00:09:08.055 Min: 64 00:09:08.055 Completion Queue Entry Size 00:09:08.055 Max: 16 00:09:08.055 Min: 16 00:09:08.055 Number of Namespaces: 256 00:09:08.055 Compare Command: Supported 00:09:08.055 Write Uncorrectable Command: Not Supported 00:09:08.055 Dataset Management Command: Supported 00:09:08.055 Write Zeroes Command: Supported 00:09:08.055 Set Features Save Field: Supported 00:09:08.055 Reservations: Not Supported 00:09:08.055 Timestamp: Supported 00:09:08.055 Copy: Supported 00:09:08.055 Volatile Write Cache: Present 00:09:08.055 Atomic Write Unit (Normal): 1 00:09:08.055 Atomic Write Unit (PFail): 1 00:09:08.055 Atomic Compare & Write Unit: 1 00:09:08.055 Fused Compare & Write: Not Supported 00:09:08.055 Scatter-Gather List 00:09:08.055 SGL Command Set: Supported 00:09:08.055 SGL Keyed: Not Supported 00:09:08.055 SGL Bit Bucket Descriptor: Not Supported 00:09:08.055 SGL Metadata Pointer: Not Supported 00:09:08.055 Oversized SGL: Not Supported 00:09:08.055 SGL Metadata Address: Not Supported 00:09:08.055 SGL Offset: Not Supported 00:09:08.055 Transport SGL Data Block: Not Supported 00:09:08.055 Replay Protected Memory Block: Not Supported 00:09:08.055 00:09:08.055 Firmware Slot Information 00:09:08.055 
========================= 00:09:08.055 Active slot: 1 00:09:08.055 Slot 1 Firmware Revision: 1.0 00:09:08.055 00:09:08.055 00:09:08.055 Commands Supported and Effects 00:09:08.055 ============================== 00:09:08.055 Admin Commands 00:09:08.055 -------------- 00:09:08.055 Delete I/O Submission Queue (00h): Supported 00:09:08.055 Create I/O Submission Queue (01h): Supported 00:09:08.055 Get Log Page (02h): Supported 00:09:08.055 Delete I/O Completion Queue (04h): Supported 00:09:08.055 Create I/O Completion Queue (05h): Supported 00:09:08.055 Identify (06h): Supported 00:09:08.055 Abort (08h): Supported 00:09:08.055 Set Features (09h): Supported 00:09:08.055 Get Features (0Ah): Supported 00:09:08.055 Asynchronous Event Request (0Ch): Supported 00:09:08.055 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.055 Directive Send (19h): Supported 00:09:08.055 Directive Receive (1Ah): Supported 00:09:08.055 Virtualization Management (1Ch): Supported 00:09:08.055 Doorbell Buffer Config (7Ch): Supported 00:09:08.055 Format NVM (80h): Supported LBA-Change 00:09:08.055 I/O Commands 00:09:08.055 ------------ 00:09:08.055 Flush (00h): Supported LBA-Change 00:09:08.055 Write (01h): Supported LBA-Change 00:09:08.055 Read (02h): Supported 00:09:08.055 Compare (05h): Supported 00:09:08.055 Write Zeroes (08h): Supported LBA-Change 00:09:08.055 Dataset Management (09h): Supported LBA-Change 00:09:08.055 Unknown (0Ch): Supported 00:09:08.056 Unknown (12h): Supported 00:09:08.056 Copy (19h): Supported LBA-Change 00:09:08.056 Unknown (1Dh): Supported LBA-Change 00:09:08.056 00:09:08.056 Error Log 00:09:08.056 ========= 00:09:08.056 00:09:08.056 Arbitration 00:09:08.056 =========== 00:09:08.056 Arbitration Burst: no limit 00:09:08.056 00:09:08.056 Power Management 00:09:08.056 ================ 00:09:08.056 Number of Power States: 1 00:09:08.056 Current Power State: Power State #0 00:09:08.056 Power State #0: 00:09:08.056 Max Power: 25.00 W 00:09:08.056 Non-Operational State: Operational 00:09:08.056 Entry Latency: 16 microseconds 00:09:08.056 Exit Latency: 4 microseconds 00:09:08.056 Relative Read Throughput: 0 00:09:08.056 Relative Read Latency: 0 00:09:08.056 Relative Write Throughput: 0 00:09:08.056 Relative Write Latency: 0 00:09:08.056 Idle Power: Not Reported 00:09:08.056 Active Power: Not Reported 00:09:08.056 Non-Operational Permissive Mode: Not Supported 00:09:08.056 00:09:08.056 Health Information 00:09:08.056 ================== 00:09:08.056 Critical Warnings: 00:09:08.056 Available Spare Space: OK 00:09:08.056 Temperature: OK 00:09:08.056 Device Reliability: OK 00:09:08.056 Read Only: No 00:09:08.056 Volatile Memory Backup: OK 00:09:08.056 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.056 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.056 Available Spare: 0% 00:09:08.056 Available Spare Threshold: 0% 00:09:08.056 Life Percentage Used: 0% 00:09:08.056 Data Units Read: 1360 00:09:08.056 Data Units Written: 631 00:09:08.056 Host Read Commands: 58453 00:09:08.056 Host Write Commands: 28664 00:09:08.056 Controller Busy Time: 0 minutes 00:09:08.056 Power Cycles: 0 00:09:08.056 Power On Hours: 0 hours 00:09:08.056 Unsafe Shutdowns: 0 00:09:08.056 Unrecoverable Media Errors: 0 00:09:08.056 Lifetime Error Log Entries: 0 00:09:08.056 Warning Temperature Time: 0 minutes 00:09:08.056 Critical Temperature Time: 0 minutes 00:09:08.056 00:09:08.056 Number of Queues 00:09:08.056 ================ 00:09:08.056 Number of I/O Submission Queues: 64 00:09:08.056 Number of I/O 
Completion Queues: 64 00:09:08.056 00:09:08.056 ZNS Specific Controller Data 00:09:08.056 ============================ 00:09:08.056 Zone Append Size Limit: 0 00:09:08.056 00:09:08.056 00:09:08.056 Active Namespaces 00:09:08.056 ================= 00:09:08.056 Namespace ID:1 00:09:08.056 Error Recovery Timeout: Unlimited 00:09:08.056 Command Set Identifier: NVM (00h) 00:09:08.056 Deallocate: Supported 00:09:08.056 [2024-11-28 06:34:18.596723] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 74875 terminated unexpected 00:09:08.056 Deallocated/Unwritten Error: Supported 00:09:08.056 Deallocated Read Value: All 0x00 00:09:08.056 Deallocate in Write Zeroes: Not Supported 00:09:08.056 Deallocated Guard Field: 0xFFFF 00:09:08.056 Flush: Supported 00:09:08.056 Reservation: Not Supported 00:09:08.056 Namespace Sharing Capabilities: Multiple Controllers 00:09:08.056 Size (in LBAs): 262144 (1GiB) 00:09:08.056 Capacity (in LBAs): 262144 (1GiB) 00:09:08.056 Utilization (in LBAs): 262144 (1GiB) 00:09:08.056 Thin Provisioning: Not Supported 00:09:08.056 Per-NS Atomic Units: No 00:09:08.056 Maximum Single Source Range Length: 128 00:09:08.056 Maximum Copy Length: 128 00:09:08.056 Maximum Source Range Count: 128 00:09:08.056 NGUID/EUI64 Never Reused: No 00:09:08.056 Namespace Write Protected: No 00:09:08.056 Endurance group ID: 1 00:09:08.056 Number of LBA Formats: 8 00:09:08.056 Current LBA Format: LBA Format #04 00:09:08.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.056 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.056 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.056 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.056 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.056 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.056 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.056 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.056 00:09:08.056 Get Feature FDP: 00:09:08.056 ================ 00:09:08.056 Enabled: Yes 00:09:08.056 FDP configuration index: 0 00:09:08.056 00:09:08.056 FDP configurations log page 00:09:08.056 =========================== 00:09:08.056 Number of FDP configurations: 1 00:09:08.056 Version: 0 00:09:08.056 Size: 112 00:09:08.056 FDP Configuration Descriptor: 0 00:09:08.056 Descriptor Size: 96 00:09:08.056 Reclaim Group Identifier format: 2 00:09:08.056 FDP Volatile Write Cache: Not Present 00:09:08.056 FDP Configuration: Valid 00:09:08.056 Vendor Specific Size: 0 00:09:08.056 Number of Reclaim Groups: 2 00:09:08.056 Number of Reclaim Unit Handles: 8 00:09:08.056 Max Placement Identifiers: 128 00:09:08.056 Number of Namespaces Supported: 256 00:09:08.056 Reclaim unit Nominal Size: 6000000 bytes 00:09:08.056 Estimated Reclaim Unit Time Limit: Not Reported 00:09:08.056 RUH Desc #000: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #001: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #002: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #003: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #004: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #005: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #006: RUH Type: Initially Isolated 00:09:08.056 RUH Desc #007: RUH Type: Initially Isolated 00:09:08.056 00:09:08.056 FDP reclaim unit handle usage log page 00:09:08.056 ====================================== 00:09:08.056 Number of Reclaim Unit Handles: 8 00:09:08.056 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:08.056 RUH Usage Desc #001: RUH
Attributes: Unused 00:09:08.056 RUH Usage Desc #002: RUH Attributes: Unused 00:09:08.056 RUH Usage Desc #003: RUH Attributes: Unused 00:09:08.056 RUH Usage Desc #004: RUH Attributes: Unused 00:09:08.056 RUH Usage Desc #005: RUH Attributes: Unused 00:09:08.056 RUH Usage Desc #006: RUH Attributes: Unused 00:09:08.056 RUH Usage Desc #007: RUH Attributes: Unused 00:09:08.056 00:09:08.056 FDP statistics log page 00:09:08.056 ======================= 00:09:08.056 Host bytes with metadata written: 411975680 00:09:08.056 Media bytes with metadata written: 412065792 00:09:08.056 Media bytes erased: 0 00:09:08.056 00:09:08.056 FDP events log page 00:09:08.056 =================== 00:09:08.056 Number of FDP events: 0 00:09:08.056 00:09:08.056 ===================================================== 00:09:08.056 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:08.056 ===================================================== 00:09:08.056 Controller Capabilities/Features 00:09:08.056 ================================ 00:09:08.056 Vendor ID: 1b36 00:09:08.056 Subsystem Vendor ID: 1af4 00:09:08.056 Serial Number: 12340 00:09:08.056 Model Number: QEMU NVMe Ctrl 00:09:08.056 Firmware Version: 8.0.0 00:09:08.056 Recommended Arb Burst: 6 00:09:08.056 IEEE OUI Identifier: 00 54 52 00:09:08.056 Multi-path I/O 00:09:08.056 May have multiple subsystem ports: No 00:09:08.056 May have multiple controllers: No 00:09:08.056 Associated with SR-IOV VF: No 00:09:08.056 Max Data Transfer Size: 524288 00:09:08.056 Max Number of Namespaces: 256 00:09:08.056 Max Number of I/O Queues: 64 00:09:08.056 NVMe Specification Version (VS): 1.4 00:09:08.056 NVMe Specification Version (Identify): 1.4 00:09:08.056 Maximum Queue Entries: 2048 00:09:08.056 Contiguous Queues Required: Yes 00:09:08.056 Arbitration Mechanisms Supported 00:09:08.056 Weighted Round Robin: Not Supported 00:09:08.056 Vendor Specific: Not Supported 00:09:08.056 Reset Timeout: 7500 ms 00:09:08.056 Doorbell Stride: 4 bytes 00:09:08.056 NVM Subsystem Reset: Not Supported 00:09:08.056 Command Sets Supported 00:09:08.056 NVM Command Set: Supported 00:09:08.056 Boot Partition: Not Supported 00:09:08.056 Memory Page Size Minimum: 4096 bytes 00:09:08.056 Memory Page Size Maximum: 65536 bytes 00:09:08.056 Persistent Memory Region: Not Supported 00:09:08.056 Optional Asynchronous Events Supported 00:09:08.056 Namespace Attribute Notices: Supported 00:09:08.056 Firmware Activation Notices: Not Supported 00:09:08.056 ANA Change Notices: Not Supported 00:09:08.056 PLE Aggregate Log Change Notices: Not Supported 00:09:08.056 LBA Status Info Alert Notices: Not Supported 00:09:08.056 EGE Aggregate Log Change Notices: Not Supported 00:09:08.056 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.056 Zone Descriptor Change Notices: Not Supported 00:09:08.056 Discovery Log Change Notices: Not Supported 00:09:08.056 Controller Attributes 00:09:08.056 128-bit Host Identifier: Not Supported 00:09:08.056 Non-Operational Permissive Mode: Not Supported 00:09:08.056 NVM Sets: Not Supported 00:09:08.056 Read Recovery Levels: Not Supported 00:09:08.056 Endurance Groups: Not Supported 00:09:08.056 Predictable Latency Mode: Not Supported 00:09:08.056 Traffic Based Keep ALive: Not Supported 00:09:08.056 Namespace Granularity: Not Supported 00:09:08.056 SQ Associations: Not Supported 00:09:08.056 UUID List: Not Supported 00:09:08.056 Multi-Domain Subsystem: Not Supported 00:09:08.056 Fixed Capacity Management: Not Supported 00:09:08.056 Variable Capacity Management: Not Supported 00:09:08.056 
Delete Endurance Group: Not Supported 00:09:08.056 Delete NVM Set: Not Supported 00:09:08.056 Extended LBA Formats Supported: Supported 00:09:08.056 Flexible Data Placement Supported: Not Supported 00:09:08.056 00:09:08.056 Controller Memory Buffer Support 00:09:08.056 ================================ 00:09:08.056 Supported: No 00:09:08.056 00:09:08.056 Persistent Memory Region Support 00:09:08.056 ================================ 00:09:08.056 Supported: No 00:09:08.056 00:09:08.056 Admin Command Set Attributes 00:09:08.056 ============================ 00:09:08.056 Security Send/Receive: Not Supported 00:09:08.056 Format NVM: Supported 00:09:08.056 Firmware Activate/Download: Not Supported 00:09:08.056 Namespace Management: Supported 00:09:08.056 Device Self-Test: Not Supported 00:09:08.056 Directives: Supported 00:09:08.056 NVMe-MI: Not Supported 00:09:08.056 Virtualization Management: Not Supported 00:09:08.056 Doorbell Buffer Config: Supported 00:09:08.056 Get LBA Status Capability: Not Supported 00:09:08.056 Command & Feature Lockdown Capability: Not Supported 00:09:08.056 Abort Command Limit: 4 00:09:08.056 Async Event Request Limit: 4 00:09:08.056 Number of Firmware Slots: N/A 00:09:08.056 Firmware Slot 1 Read-Only: N/A 00:09:08.056 Firmware Activation Without Reset: N/A 00:09:08.056 Multiple Update Detection Support: N/A 00:09:08.056 Firmware Update Granularity: No Information Provided 00:09:08.056 Per-Namespace SMART Log: Yes 00:09:08.056 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.056 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:08.056 Command Effects Log Page: Supported 00:09:08.056 Get Log Page Extended Data: Supported 00:09:08.056 Telemetry Log Pages: Not Supported 00:09:08.056 Persistent Event Log Pages: Not Supported 00:09:08.056 Supported Log Pages Log Page: May Support 00:09:08.056 Commands Supported & Effects Log Page: Not Supported 00:09:08.056 Feature Identifiers & Effects Log Page:May Support 00:09:08.056 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.056 Data Area 4 for Telemetry Log: Not Supported 00:09:08.056 Error Log Page Entries Supported: 1 00:09:08.056 Keep Alive: Not Supported 00:09:08.056 00:09:08.056 NVM Command Set Attributes 00:09:08.056 ========================== 00:09:08.056 Submission Queue Entry Size 00:09:08.056 Max: 64 00:09:08.056 Min: 64 00:09:08.056 Completion Queue Entry Size 00:09:08.056 Max: 16 00:09:08.056 Min: 16 00:09:08.056 Number of Namespaces: 256 00:09:08.056 Compare Command: Supported 00:09:08.056 Write Uncorrectable Command: Not Supported 00:09:08.056 Dataset Management Command: Supported 00:09:08.056 Write Zeroes Command: Supported 00:09:08.056 Set Features Save Field: Supported 00:09:08.056 Reservations: Not Supported 00:09:08.056 Timestamp: Supported 00:09:08.056 Copy: Supported 00:09:08.056 Volatile Write Cache: Present 00:09:08.056 Atomic Write Unit (Normal): 1 00:09:08.056 Atomic Write Unit (PFail): 1 00:09:08.056 Atomic Compare & Write Unit: 1 00:09:08.056 Fused Compare & Write: Not Supported 00:09:08.056 Scatter-Gather List 00:09:08.056 SGL Command Set: Supported 00:09:08.056 SGL Keyed: Not Supported 00:09:08.056 SGL Bit Bucket Descriptor: Not Supported 00:09:08.056 SGL Metadata Pointer: Not Supported 00:09:08.056 Oversized SGL: Not Supported 00:09:08.057 SGL Metadata Address: Not Supported 00:09:08.057 SGL Offset: Not Supported 00:09:08.057 Transport SGL Data Block: Not Supported 00:09:08.057 Replay Protected Memory Block: Not Supported 00:09:08.057 00:09:08.057 Firmware Slot Information 
00:09:08.057 ========================= 00:09:08.057 Active slot: 1 00:09:08.057 Slot 1 Firmware Revision: 1.0 00:09:08.057 00:09:08.057 00:09:08.057 Commands Supported and Effects 00:09:08.057 ============================== 00:09:08.057 Admin Commands 00:09:08.057 -------------- 00:09:08.057 Delete I/O Submission Queue (00h): Supported 00:09:08.057 Create I/O Submission Queue (01h): Supported 00:09:08.057 Get Log Page (02h): Supported 00:09:08.057 Delete I/O Completion Queue (04h): Supported 00:09:08.057 Create I/O Completion Queue (05h): Supported 00:09:08.057 Identify (06h): Supported 00:09:08.057 Abort (08h): Supported 00:09:08.057 Set Features (09h): Supported 00:09:08.057 Get Features (0Ah): Supported 00:09:08.057 Asynchronous Event Request (0Ch): Supported 00:09:08.057 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.057 Directive Send (19h): Supported 00:09:08.057 Directive Receive (1Ah): Supported 00:09:08.057 Virtualization Management (1Ch): Supported 00:09:08.057 Doorbell Buffer Config (7Ch): Supported 00:09:08.057 Format NVM (80h): Supported LBA-Change 00:09:08.057 I/O Commands 00:09:08.057 ------------ 00:09:08.057 Flush (00h): Supported LBA-Change 00:09:08.057 Write (01h): Supported LBA-Change 00:09:08.057 Read (02h): Supported 00:09:08.057 Compare (05h): Supported 00:09:08.057 Write Zeroes (08h): Supported LBA-Change 00:09:08.057 Dataset Management (09h): Supported LBA-Change 00:09:08.057 Unknown (0Ch): Supported 00:09:08.057 Unknown (12h): Supported 00:09:08.057 Copy (19h): Supported LBA-Change 00:09:08.057 Unknown (1Dh): Supported LBA-Change 00:09:08.057 00:09:08.057 Error Log 00:09:08.057 ========= 00:09:08.057 00:09:08.057 Arbitration 00:09:08.057 =========== 00:09:08.057 Arbitration Burst: no limit 00:09:08.057 00:09:08.057 Power Management 00:09:08.057 ================ 00:09:08.057 Number of Power States: 1 00:09:08.057 Current Power State: Power State #0 00:09:08.057 Power State #0: 00:09:08.057 Max Power: 25.00 W 00:09:08.057 Non-Operational State: Operational 00:09:08.057 Entry Latency: 16 microseconds 00:09:08.057 Exit Latency: 4 microseconds 00:09:08.057 Relative Read Throughput: 0 00:09:08.057 Relative Read Latency: 0 00:09:08.057 Relative Write Throughput: 0 00:09:08.057 Relative Write Latency: 0 00:09:08.057 Idle Power: Not Reported 00:09:08.057 Active Power: Not Reported 00:09:08.057 Non-Operational Permissive Mode: Not Supported 00:09:08.057 00:09:08.057 Health Information 00:09:08.057 ================== 00:09:08.057 Critical Warnings: 00:09:08.057 Available Spare Space: OK 00:09:08.057 Temperature: OK 00:09:08.057 Device Reliability: OK 00:09:08.057 Read Only: No 00:09:08.057 Volatile Memory Backup: OK 00:09:08.057 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.057 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.057 Available Spare: 0% 00:09:08.057 Available Spare Threshold: 0% 00:09:08.057 Life Percentage Used: 0% 00:09:08.057 Data Units Read: 1761 00:09:08.057 Data Units Written: 812 00:09:08.057 Host Read Commands: 85733 00:09:08.057 Host Write Commands: [2024-11-28 06:34:18.597802] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 74875 terminated unexpected 00:09:08.057 42544 00:09:08.057 Controller Busy Time: 0 minutes 00:09:08.057 Power Cycles: 0 00:09:08.057 Power On Hours: 0 hours 00:09:08.057 Unsafe Shutdowns: 0 00:09:08.057 Unrecoverable Media Errors: 0 00:09:08.057 Lifetime Error Log Entries: 0 00:09:08.057 Warning Temperature Time: 0 minutes 00:09:08.057 Critical Temperature Time: 
0 minutes 00:09:08.057 00:09:08.057 Number of Queues 00:09:08.057 ================ 00:09:08.057 Number of I/O Submission Queues: 64 00:09:08.057 Number of I/O Completion Queues: 64 00:09:08.057 00:09:08.057 ZNS Specific Controller Data 00:09:08.057 ============================ 00:09:08.057 Zone Append Size Limit: 0 00:09:08.057 00:09:08.057 00:09:08.057 Active Namespaces 00:09:08.057 ================= 00:09:08.057 Namespace ID:1 00:09:08.057 Error Recovery Timeout: Unlimited 00:09:08.057 Command Set Identifier: NVM (00h) 00:09:08.057 Deallocate: Supported 00:09:08.057 Deallocated/Unwritten Error: Supported 00:09:08.057 Deallocated Read Value: All 0x00 00:09:08.057 Deallocate in Write Zeroes: Not Supported 00:09:08.057 Deallocated Guard Field: 0xFFFF 00:09:08.057 Flush: Supported 00:09:08.057 Reservation: Not Supported 00:09:08.057 Metadata Transferred as: Separate Metadata Buffer 00:09:08.057 Namespace Sharing Capabilities: Private 00:09:08.057 Size (in LBAs): 1548666 (5GiB) 00:09:08.057 Capacity (in LBAs): 1548666 (5GiB) 00:09:08.057 Utilization (in LBAs): 1548666 (5GiB) 00:09:08.057 Thin Provisioning: Not Supported 00:09:08.057 Per-NS Atomic Units: No 00:09:08.057 Maximum Single Source Range Length: 128 00:09:08.057 Maximum Copy Length: 128 00:09:08.057 Maximum Source Range Count: 128 00:09:08.057 NGUID/EUI64 Never Reused: No 00:09:08.057 Namespace Write Protected: No 00:09:08.057 Number of LBA Formats: 8 00:09:08.057 Current LBA Format: LBA Format #07 00:09:08.057 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.057 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.057 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.057 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.057 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.057 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.057 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.057 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.057 00:09:08.057 ===================================================== 00:09:08.057 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:08.057 ===================================================== 00:09:08.057 Controller Capabilities/Features 00:09:08.057 ================================ 00:09:08.057 Vendor ID: 1b36 00:09:08.057 Subsystem Vendor ID: 1af4 00:09:08.057 Serial Number: 12342 00:09:08.057 Model Number: QEMU NVMe Ctrl 00:09:08.057 Firmware Version: 8.0.0 00:09:08.057 Recommended Arb Burst: 6 00:09:08.057 IEEE OUI Identifier: 00 54 52 00:09:08.057 Multi-path I/O 00:09:08.057 May have multiple subsystem ports: No 00:09:08.057 May have multiple controllers: No 00:09:08.057 Associated with SR-IOV VF: No 00:09:08.057 Max Data Transfer Size: 524288 00:09:08.057 Max Number of Namespaces: 256 00:09:08.057 Max Number of I/O Queues: 64 00:09:08.057 NVMe Specification Version (VS): 1.4 00:09:08.057 NVMe Specification Version (Identify): 1.4 00:09:08.057 Maximum Queue Entries: 2048 00:09:08.057 Contiguous Queues Required: Yes 00:09:08.057 Arbitration Mechanisms Supported 00:09:08.057 Weighted Round Robin: Not Supported 00:09:08.057 Vendor Specific: Not Supported 00:09:08.057 Reset Timeout: 7500 ms 00:09:08.057 Doorbell Stride: 4 bytes 00:09:08.057 NVM Subsystem Reset: Not Supported 00:09:08.057 Command Sets Supported 00:09:08.057 NVM Command Set: Supported 00:09:08.057 Boot Partition: Not Supported 00:09:08.057 Memory Page Size Minimum: 4096 bytes 00:09:08.057 Memory Page Size Maximum: 65536 bytes 00:09:08.057 Persistent Memory Region: Not 
Supported 00:09:08.057 Optional Asynchronous Events Supported 00:09:08.057 Namespace Attribute Notices: Supported 00:09:08.057 Firmware Activation Notices: Not Supported 00:09:08.057 ANA Change Notices: Not Supported 00:09:08.057 PLE Aggregate Log Change Notices: Not Supported 00:09:08.057 LBA Status Info Alert Notices: Not Supported 00:09:08.057 EGE Aggregate Log Change Notices: Not Supported 00:09:08.057 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.057 Zone Descriptor Change Notices: Not Supported 00:09:08.057 Discovery Log Change Notices: Not Supported 00:09:08.057 Controller Attributes 00:09:08.057 128-bit Host Identifier: Not Supported 00:09:08.057 Non-Operational Permissive Mode: Not Supported 00:09:08.057 NVM Sets: Not Supported 00:09:08.057 Read Recovery Levels: Not Supported 00:09:08.057 Endurance Groups: Not Supported 00:09:08.057 Predictable Latency Mode: Not Supported 00:09:08.057 Traffic Based Keep ALive: Not Supported 00:09:08.057 Namespace Granularity: Not Supported 00:09:08.057 SQ Associations: Not Supported 00:09:08.057 UUID List: Not Supported 00:09:08.057 Multi-Domain Subsystem: Not Supported 00:09:08.057 Fixed Capacity Management: Not Supported 00:09:08.057 Variable Capacity Management: Not Supported 00:09:08.057 Delete Endurance Group: Not Supported 00:09:08.057 Delete NVM Set: Not Supported 00:09:08.057 Extended LBA Formats Supported: Supported 00:09:08.057 Flexible Data Placement Supported: Not Supported 00:09:08.057 00:09:08.057 Controller Memory Buffer Support 00:09:08.057 ================================ 00:09:08.057 Supported: No 00:09:08.057 00:09:08.057 Persistent Memory Region Support 00:09:08.057 ================================ 00:09:08.057 Supported: No 00:09:08.057 00:09:08.057 Admin Command Set Attributes 00:09:08.057 ============================ 00:09:08.057 Security Send/Receive: Not Supported 00:09:08.057 Format NVM: Supported 00:09:08.057 Firmware Activate/Download: Not Supported 00:09:08.057 Namespace Management: Supported 00:09:08.057 Device Self-Test: Not Supported 00:09:08.057 Directives: Supported 00:09:08.057 NVMe-MI: Not Supported 00:09:08.057 Virtualization Management: Not Supported 00:09:08.057 Doorbell Buffer Config: Supported 00:09:08.057 Get LBA Status Capability: Not Supported 00:09:08.057 Command & Feature Lockdown Capability: Not Supported 00:09:08.057 Abort Command Limit: 4 00:09:08.057 Async Event Request Limit: 4 00:09:08.057 Number of Firmware Slots: N/A 00:09:08.057 Firmware Slot 1 Read-Only: N/A 00:09:08.057 Firmware Activation Without Reset: N/A 00:09:08.057 Multiple Update Detection Support: N/A 00:09:08.057 Firmware Update Granularity: No Information Provided 00:09:08.057 Per-Namespace SMART Log: Yes 00:09:08.057 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.057 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:08.057 Command Effects Log Page: Supported 00:09:08.057 Get Log Page Extended Data: Supported 00:09:08.057 Telemetry Log Pages: Not Supported 00:09:08.057 Persistent Event Log Pages: Not Supported 00:09:08.057 Supported Log Pages Log Page: May Support 00:09:08.057 Commands Supported & Effects Log Page: Not Supported 00:09:08.057 Feature Identifiers & Effects Log Page:May Support 00:09:08.057 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.057 Data Area 4 for Telemetry Log: Not Supported 00:09:08.058 Error Log Page Entries Supported: 1 00:09:08.058 Keep Alive: Not Supported 00:09:08.058 00:09:08.058 NVM Command Set Attributes 00:09:08.058 ========================== 00:09:08.058 
Submission Queue Entry Size 00:09:08.058 Max: 64 00:09:08.058 Min: 64 00:09:08.058 Completion Queue Entry Size 00:09:08.058 Max: 16 00:09:08.058 Min: 16 00:09:08.058 Number of Namespaces: 256 00:09:08.058 Compare Command: Supported 00:09:08.058 Write Uncorrectable Command: Not Supported 00:09:08.058 Dataset Management Command: Supported 00:09:08.058 Write Zeroes Command: Supported 00:09:08.058 Set Features Save Field: Supported 00:09:08.058 Reservations: Not Supported 00:09:08.058 Timestamp: Supported 00:09:08.058 Copy: Supported 00:09:08.058 Volatile Write Cache: Present 00:09:08.058 Atomic Write Unit (Normal): 1 00:09:08.058 Atomic Write Unit (PFail): 1 00:09:08.058 Atomic Compare & Write Unit: 1 00:09:08.058 Fused Compare & Write: Not Supported 00:09:08.058 Scatter-Gather List 00:09:08.058 SGL Command Set: Supported 00:09:08.058 SGL Keyed: Not Supported 00:09:08.058 SGL Bit Bucket Descriptor: Not Supported 00:09:08.058 SGL Metadata Pointer: Not Supported 00:09:08.058 Oversized SGL: Not Supported 00:09:08.058 SGL Metadata Address: Not Supported 00:09:08.058 SGL Offset: Not Supported 00:09:08.058 Transport SGL Data Block: Not Supported 00:09:08.058 Replay Protected Memory Block: Not Supported 00:09:08.058 00:09:08.058 Firmware Slot Information 00:09:08.058 ========================= 00:09:08.058 Active slot: 1 00:09:08.058 Slot 1 Firmware Revision: 1.0 00:09:08.058 00:09:08.058 00:09:08.058 Commands Supported and Effects 00:09:08.058 ============================== 00:09:08.058 Admin Commands 00:09:08.058 -------------- 00:09:08.058 Delete I/O Submission Queue (00h): Supported 00:09:08.058 Create I/O Submission Queue (01h): Supported 00:09:08.058 Get Log Page (02h): Supported 00:09:08.058 Delete I/O Completion Queue (04h): Supported 00:09:08.058 Create I/O Completion Queue (05h): Supported 00:09:08.058 Identify (06h): Supported 00:09:08.058 Abort (08h): Supported 00:09:08.058 Set Features (09h): Supported 00:09:08.058 Get Features (0Ah): Supported 00:09:08.058 Asynchronous Event Request (0Ch): Supported 00:09:08.058 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.058 Directive Send (19h): Supported 00:09:08.058 Directive Receive (1Ah): Supported 00:09:08.058 Virtualization Management (1Ch): Supported 00:09:08.058 Doorbell Buffer Config (7Ch): Supported 00:09:08.058 Format NVM (80h): Supported LBA-Change 00:09:08.058 I/O Commands 00:09:08.058 ------------ 00:09:08.058 Flush (00h): Supported LBA-Change 00:09:08.058 Write (01h): Supported LBA-Change 00:09:08.058 Read (02h): Supported 00:09:08.058 Compare (05h): Supported 00:09:08.058 Write Zeroes (08h): Supported LBA-Change 00:09:08.058 Dataset Management (09h): Supported LBA-Change 00:09:08.058 Unknown (0Ch): Supported 00:09:08.058 Unknown (12h): Supported 00:09:08.058 Copy (19h): Supported LBA-Change 00:09:08.058 Unknown (1Dh): Supported LBA-Change 00:09:08.058 00:09:08.058 Error Log 00:09:08.058 ========= 00:09:08.058 00:09:08.058 Arbitration 00:09:08.058 =========== 00:09:08.058 Arbitration Burst: no limit 00:09:08.058 00:09:08.058 Power Management 00:09:08.058 ================ 00:09:08.058 Number of Power States: 1 00:09:08.058 Current Power State: Power State #0 00:09:08.058 Power State #0: 00:09:08.058 Max Power: 25.00 W 00:09:08.058 Non-Operational State: Operational 00:09:08.058 Entry Latency: 16 microseconds 00:09:08.058 Exit Latency: 4 microseconds 00:09:08.058 Relative Read Throughput: 0 00:09:08.058 Relative Read Latency: 0 00:09:08.058 Relative Write Throughput: 0 00:09:08.058 Relative Write Latency: 0 
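The Commands Supported and Effects section prints each command with its hex opcode, and opcodes the tool has no name for come out as "Unknown (xxh)". A small lookup table rebuilt from the I/O opcodes shown in this dump (a sketch, not SPDK's own table):

    # Opcode-to-name map for the I/O commands listed in the dump above.
    IO_OPCODES = {
        0x00: "Flush",
        0x01: "Write",
        0x02: "Read",
        0x05: "Compare",
        0x08: "Write Zeroes",
        0x09: "Dataset Management",
        0x19: "Copy",
    }

    def io_opcode_name(opc: int) -> str:
        # Unlisted opcodes fall back to "Unknown (xxh)", mirroring the log.
        return IO_OPCODES.get(opc, f"Unknown ({opc:02X}h)")

    print(io_opcode_name(0x19))   # Copy
    print(io_opcode_name(0x1D))   # Unknown (1Dh)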
00:09:08.058 Idle Power: Not Reported 00:09:08.058 Active Power: Not Reported 00:09:08.058 Non-Operational Permissive Mode: Not Supported 00:09:08.058 00:09:08.058 Health Information 00:09:08.058 ================== 00:09:08.058 Critical Warnings: 00:09:08.058 Available Spare Space: OK 00:09:08.058 Temperature: OK 00:09:08.058 Device Reliability: OK 00:09:08.058 Read Only: No 00:09:08.058 Volatile Memory Backup: OK 00:09:08.058 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.058 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.058 Available Spare: 0% 00:09:08.058 Available Spare Threshold: 0% 00:09:08.058 Life Percentage Used: 0% 00:09:08.058 Data Units Read: 3620 00:09:08.058 Data Units Written: 1671 00:09:08.058 Host Read Commands: 171874 00:09:08.058 Host Write Commands: 84211 00:09:08.058 Controller Busy Time: 0 minutes 00:09:08.058 Power Cycles: 0 00:09:08.058 Power On Hours: 0 hours 00:09:08.058 Unsafe Shutdowns: 0 00:09:08.058 Unrecoverable Media Errors: 0 00:09:08.058 Lifetime Error Log Entries: 0 00:09:08.058 Warning Temperature Time: 0 minutes 00:09:08.058 Critical Temperature Time: 0 minutes 00:09:08.058 00:09:08.058 Number of Queues 00:09:08.058 ================ 00:09:08.058 Number of I/O Submission Queues: 64 00:09:08.058 Number of I/O Completion Queues: 64 00:09:08.058 00:09:08.058 ZNS Specific Controller Data 00:09:08.058 ============================ 00:09:08.058 Zone Append Size Limit: 0 00:09:08.058 00:09:08.058 00:09:08.058 Active Namespaces 00:09:08.058 ================= 00:09:08.058 Namespace ID:1 00:09:08.058 Error Recovery Timeout: Unlimited 00:09:08.058 Command Set Identifier: NVM (00h) 00:09:08.058 Deallocate: Supported 00:09:08.058 Deallocated/Unwritten Error: Supported 00:09:08.058 Deallocated Read Value: All 0x00 00:09:08.058 Deallocate in Write Zeroes: Not Supported 00:09:08.058 Deallocated Guard Field: 0xFFFF 00:09:08.058 Flush: Supported 00:09:08.058 Reservation: Not Supported 00:09:08.058 Namespace Sharing Capabilities: Private 00:09:08.058 Size (in LBAs): 1048576 (4GiB) 00:09:08.058 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.058 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.058 Thin Provisioning: Not Supported 00:09:08.058 Per-NS Atomic Units: No 00:09:08.058 Maximum Single Source Range Length: 128 00:09:08.058 Maximum Copy Length: 128 00:09:08.058 Maximum Source Range Count: 128 00:09:08.058 NGUID/EUI64 Never Reused: No 00:09:08.058 Namespace Write Protected: No 00:09:08.058 Number of LBA Formats: 8 00:09:08.058 Current LBA Format: LBA Format #04 00:09:08.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.058 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.058 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.058 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.058 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.058 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.058 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.058 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.058 00:09:08.058 Namespace ID:2 00:09:08.058 Error Recovery Timeout: Unlimited 00:09:08.058 Command Set Identifier: NVM (00h) 00:09:08.058 Deallocate: Supported 00:09:08.058 Deallocated/Unwritten Error: Supported 00:09:08.058 Deallocated Read Value: All 0x00 00:09:08.058 Deallocate in Write Zeroes: Not Supported 00:09:08.058 Deallocated Guard Field: 0xFFFF 00:09:08.058 Flush: Supported 00:09:08.058 Reservation: Not Supported 00:09:08.058 Namespace Sharing Capabilities: Private 00:09:08.058 Size (in LBAs): 
1048576 (4GiB) 00:09:08.058 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.058 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.058 Thin Provisioning: Not Supported 00:09:08.058 Per-NS Atomic Units: No 00:09:08.058 Maximum Single Source Range Length: 128 00:09:08.058 Maximum Copy Length: 128 00:09:08.058 Maximum Source Range Count: 128 00:09:08.058 NGUID/EUI64 Never Reused: No 00:09:08.058 Namespace Write Protected: No 00:09:08.058 Number of LBA Formats: 8 00:09:08.058 Current LBA Format: LBA Format #04 00:09:08.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.058 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.058 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.058 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.058 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.058 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.058 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.058 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.058 00:09:08.058 Namespace ID:3 00:09:08.058 Error Recovery Timeout: Unlimited 00:09:08.058 Command Set Identifier: NVM (00h) 00:09:08.058 Deallocate: Supported 00:09:08.058 Deallocated/Unwritten Error: Supported 00:09:08.058 Deallocated Read Value: All 0x00 00:09:08.058 Deallocate in Write Zeroes: Not Supported 00:09:08.058 Deallocated Guard Field: 0xFFFF 00:09:08.058 Flush: Supported 00:09:08.058 Reservation: Not Supported 00:09:08.058 Namespace Sharing Capabilities: Private 00:09:08.058 Size (in LBAs): 1048576 (4GiB) 00:09:08.058 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.058 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.058 Thin Provisioning: Not Supported 00:09:08.058 Per-NS Atomic Units: No 00:09:08.058 Maximum Single Source Range Length: 128 00:09:08.058 Maximum Copy Length: 128 00:09:08.058 Maximum Source Range Count: 128 00:09:08.058 NGUID/EUI64 Never Reused: No 00:09:08.058 Namespace Write Protected: No 00:09:08.058 Number of LBA Formats: 8 00:09:08.058 Current LBA Format: LBA Format #04 00:09:08.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.058 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.058 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.058 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.058 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.058 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.058 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.058 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.058 00:09:08.058 06:34:18 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:08.058 06:34:18 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:08.058 ===================================================== 00:09:08.058 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:08.058 ===================================================== 00:09:08.058 Controller Capabilities/Features 00:09:08.058 ================================ 00:09:08.058 Vendor ID: 1b36 00:09:08.058 Subsystem Vendor ID: 1af4 00:09:08.058 Serial Number: 12340 00:09:08.058 Model Number: QEMU NVMe Ctrl 00:09:08.058 Firmware Version: 8.0.0 00:09:08.058 Recommended Arb Burst: 6 00:09:08.058 IEEE OUI Identifier: 00 54 52 00:09:08.058 Multi-path I/O 00:09:08.058 May have multiple subsystem ports: No 00:09:08.058 May have multiple controllers: No 00:09:08.058 Associated with SR-IOV VF: No 00:09:08.058 Max Data Transfer Size: 524288 00:09:08.058 Max Number of Namespaces: 256 
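The shell trace above (for bdf in "${bdfs[@]}" followed by spdk_nvme_identify) runs the identify tool once per discovered PCIe address. The same loop could be driven from Python; the binary path and flags are copied verbatim from the trace, while the BDF list below is a stand-in for whatever enumeration filled ${bdfs[@]}:

    import subprocess

    # Stand-in for the ${bdfs[@]} array; these are the controllers in this log.
    bdfs = ["0000:00:06.0", "0000:00:07.0", "0000:00:08.0", "0000:00:09.0"]
    identify = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"

    for bdf in bdfs:
        # -r and -i exactly as in the trace above.
        result = subprocess.run(
            [identify, "-r", f"trtype:PCIe traddr:{bdf}", "-i", "0"],
            capture_output=True, text=True, check=True)
        print(result.stdout.splitlines()[1])   # "NVMe Controller at ..." banner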
00:09:08.058 Max Number of I/O Queues: 64 00:09:08.058 NVMe Specification Version (VS): 1.4 00:09:08.058 NVMe Specification Version (Identify): 1.4 00:09:08.058 Maximum Queue Entries: 2048 00:09:08.058 Contiguous Queues Required: Yes 00:09:08.058 Arbitration Mechanisms Supported 00:09:08.058 Weighted Round Robin: Not Supported 00:09:08.058 Vendor Specific: Not Supported 00:09:08.058 Reset Timeout: 7500 ms 00:09:08.058 Doorbell Stride: 4 bytes 00:09:08.058 NVM Subsystem Reset: Not Supported 00:09:08.058 Command Sets Supported 00:09:08.058 NVM Command Set: Supported 00:09:08.058 Boot Partition: Not Supported 00:09:08.058 Memory Page Size Minimum: 4096 bytes 00:09:08.058 Memory Page Size Maximum: 65536 bytes 00:09:08.058 Persistent Memory Region: Not Supported 00:09:08.058 Optional Asynchronous Events Supported 00:09:08.058 Namespace Attribute Notices: Supported 00:09:08.058 Firmware Activation Notices: Not Supported 00:09:08.058 ANA Change Notices: Not Supported 00:09:08.058 PLE Aggregate Log Change Notices: Not Supported 00:09:08.058 LBA Status Info Alert Notices: Not Supported 00:09:08.058 EGE Aggregate Log Change Notices: Not Supported 00:09:08.058 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.058 Zone Descriptor Change Notices: Not Supported 00:09:08.058 Discovery Log Change Notices: Not Supported 00:09:08.058 Controller Attributes 00:09:08.058 128-bit Host Identifier: Not Supported 00:09:08.058 Non-Operational Permissive Mode: Not Supported 00:09:08.058 NVM Sets: Not Supported 00:09:08.058 Read Recovery Levels: Not Supported 00:09:08.058 Endurance Groups: Not Supported 00:09:08.058 Predictable Latency Mode: Not Supported 00:09:08.059 Traffic Based Keep ALive: Not Supported 00:09:08.059 Namespace Granularity: Not Supported 00:09:08.059 SQ Associations: Not Supported 00:09:08.059 UUID List: Not Supported 00:09:08.059 Multi-Domain Subsystem: Not Supported 00:09:08.059 Fixed Capacity Management: Not Supported 00:09:08.059 Variable Capacity Management: Not Supported 00:09:08.059 Delete Endurance Group: Not Supported 00:09:08.059 Delete NVM Set: Not Supported 00:09:08.059 Extended LBA Formats Supported: Supported 00:09:08.059 Flexible Data Placement Supported: Not Supported 00:09:08.059 00:09:08.059 Controller Memory Buffer Support 00:09:08.059 ================================ 00:09:08.059 Supported: No 00:09:08.059 00:09:08.059 Persistent Memory Region Support 00:09:08.059 ================================ 00:09:08.059 Supported: No 00:09:08.059 00:09:08.059 Admin Command Set Attributes 00:09:08.059 ============================ 00:09:08.059 Security Send/Receive: Not Supported 00:09:08.059 Format NVM: Supported 00:09:08.059 Firmware Activate/Download: Not Supported 00:09:08.059 Namespace Management: Supported 00:09:08.059 Device Self-Test: Not Supported 00:09:08.059 Directives: Supported 00:09:08.059 NVMe-MI: Not Supported 00:09:08.059 Virtualization Management: Not Supported 00:09:08.059 Doorbell Buffer Config: Supported 00:09:08.059 Get LBA Status Capability: Not Supported 00:09:08.059 Command & Feature Lockdown Capability: Not Supported 00:09:08.059 Abort Command Limit: 4 00:09:08.059 Async Event Request Limit: 4 00:09:08.059 Number of Firmware Slots: N/A 00:09:08.059 Firmware Slot 1 Read-Only: N/A 00:09:08.059 Firmware Activation Without Reset: N/A 00:09:08.059 Multiple Update Detection Support: N/A 00:09:08.059 Firmware Update Granularity: No Information Provided 00:09:08.059 Per-Namespace SMART Log: Yes 00:09:08.059 Asymmetric Namespace Access Log Page: Not Supported 
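Nearly every line of the identify dump is a "Field: Value" pair, with capability flags rendered as Supported or Not Supported. A sketch that folds such flag lines into a dict of booleans; the regex mirrors the printed layout and is an assumption, not a stable interface:

    import re

    # A few flag lines shaped like the Admin Command Set Attributes above.
    sample = """\
    Security Send/Receive: Not Supported
    Format NVM: Supported
    Namespace Management: Supported
    Device Self-Test: Not Supported
    """

    flags = {}
    for line in sample.splitlines():
        m = re.match(r"(.+?):\s+(Not Supported|Supported)$", line.strip())
        if m:
            flags[m.group(1)] = (m.group(2) == "Supported")

    print(flags["Format NVM"])              # True
    print(flags["Security Send/Receive"])   # False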
00:09:08.059 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:08.059 Command Effects Log Page: Supported 00:09:08.059 Get Log Page Extended Data: Supported 00:09:08.059 Telemetry Log Pages: Not Supported 00:09:08.059 Persistent Event Log Pages: Not Supported 00:09:08.059 Supported Log Pages Log Page: May Support 00:09:08.059 Commands Supported & Effects Log Page: Not Supported 00:09:08.059 Feature Identifiers & Effects Log Page:May Support 00:09:08.059 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.059 Data Area 4 for Telemetry Log: Not Supported 00:09:08.059 Error Log Page Entries Supported: 1 00:09:08.059 Keep Alive: Not Supported 00:09:08.059 00:09:08.059 NVM Command Set Attributes 00:09:08.059 ========================== 00:09:08.059 Submission Queue Entry Size 00:09:08.059 Max: 64 00:09:08.059 Min: 64 00:09:08.059 Completion Queue Entry Size 00:09:08.059 Max: 16 00:09:08.059 Min: 16 00:09:08.059 Number of Namespaces: 256 00:09:08.059 Compare Command: Supported 00:09:08.059 Write Uncorrectable Command: Not Supported 00:09:08.059 Dataset Management Command: Supported 00:09:08.059 Write Zeroes Command: Supported 00:09:08.059 Set Features Save Field: Supported 00:09:08.059 Reservations: Not Supported 00:09:08.059 Timestamp: Supported 00:09:08.059 Copy: Supported 00:09:08.059 Volatile Write Cache: Present 00:09:08.059 Atomic Write Unit (Normal): 1 00:09:08.059 Atomic Write Unit (PFail): 1 00:09:08.059 Atomic Compare & Write Unit: 1 00:09:08.059 Fused Compare & Write: Not Supported 00:09:08.059 Scatter-Gather List 00:09:08.059 SGL Command Set: Supported 00:09:08.059 SGL Keyed: Not Supported 00:09:08.059 SGL Bit Bucket Descriptor: Not Supported 00:09:08.059 SGL Metadata Pointer: Not Supported 00:09:08.059 Oversized SGL: Not Supported 00:09:08.059 SGL Metadata Address: Not Supported 00:09:08.059 SGL Offset: Not Supported 00:09:08.059 Transport SGL Data Block: Not Supported 00:09:08.059 Replay Protected Memory Block: Not Supported 00:09:08.059 00:09:08.059 Firmware Slot Information 00:09:08.059 ========================= 00:09:08.059 Active slot: 1 00:09:08.059 Slot 1 Firmware Revision: 1.0 00:09:08.059 00:09:08.059 00:09:08.059 Commands Supported and Effects 00:09:08.059 ============================== 00:09:08.059 Admin Commands 00:09:08.059 -------------- 00:09:08.059 Delete I/O Submission Queue (00h): Supported 00:09:08.059 Create I/O Submission Queue (01h): Supported 00:09:08.059 Get Log Page (02h): Supported 00:09:08.059 Delete I/O Completion Queue (04h): Supported 00:09:08.059 Create I/O Completion Queue (05h): Supported 00:09:08.059 Identify (06h): Supported 00:09:08.059 Abort (08h): Supported 00:09:08.059 Set Features (09h): Supported 00:09:08.059 Get Features (0Ah): Supported 00:09:08.059 Asynchronous Event Request (0Ch): Supported 00:09:08.059 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.059 Directive Send (19h): Supported 00:09:08.059 Directive Receive (1Ah): Supported 00:09:08.059 Virtualization Management (1Ch): Supported 00:09:08.059 Doorbell Buffer Config (7Ch): Supported 00:09:08.059 Format NVM (80h): Supported LBA-Change 00:09:08.059 I/O Commands 00:09:08.059 ------------ 00:09:08.059 Flush (00h): Supported LBA-Change 00:09:08.059 Write (01h): Supported LBA-Change 00:09:08.059 Read (02h): Supported 00:09:08.059 Compare (05h): Supported 00:09:08.059 Write Zeroes (08h): Supported LBA-Change 00:09:08.059 Dataset Management (09h): Supported LBA-Change 00:09:08.059 Unknown (0Ch): Supported 00:09:08.059 Unknown (12h): Supported 00:09:08.059 Copy (19h): 
Supported LBA-Change 00:09:08.059 Unknown (1Dh): Supported LBA-Change 00:09:08.059 00:09:08.059 Error Log 00:09:08.059 ========= 00:09:08.059 00:09:08.059 Arbitration 00:09:08.059 =========== 00:09:08.059 Arbitration Burst: no limit 00:09:08.059 00:09:08.059 Power Management 00:09:08.059 ================ 00:09:08.059 Number of Power States: 1 00:09:08.059 Current Power State: Power State #0 00:09:08.059 Power State #0: 00:09:08.059 Max Power: 25.00 W 00:09:08.059 Non-Operational State: Operational 00:09:08.059 Entry Latency: 16 microseconds 00:09:08.059 Exit Latency: 4 microseconds 00:09:08.059 Relative Read Throughput: 0 00:09:08.059 Relative Read Latency: 0 00:09:08.059 Relative Write Throughput: 0 00:09:08.059 Relative Write Latency: 0 00:09:08.059 Idle Power: Not Reported 00:09:08.059 Active Power: Not Reported 00:09:08.059 Non-Operational Permissive Mode: Not Supported 00:09:08.059 00:09:08.059 Health Information 00:09:08.059 ================== 00:09:08.059 Critical Warnings: 00:09:08.059 Available Spare Space: OK 00:09:08.059 Temperature: OK 00:09:08.059 Device Reliability: OK 00:09:08.059 Read Only: No 00:09:08.059 Volatile Memory Backup: OK 00:09:08.059 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.059 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.059 Available Spare: 0% 00:09:08.059 Available Spare Threshold: 0% 00:09:08.059 Life Percentage Used: 0% 00:09:08.059 Data Units Read: 1761 00:09:08.059 Data Units Written: 812 00:09:08.059 Host Read Commands: 85733 00:09:08.059 Host Write Commands: 42544 00:09:08.059 Controller Busy Time: 0 minutes 00:09:08.059 Power Cycles: 0 00:09:08.059 Power On Hours: 0 hours 00:09:08.059 Unsafe Shutdowns: 0 00:09:08.059 Unrecoverable Media Errors: 0 00:09:08.059 Lifetime Error Log Entries: 0 00:09:08.059 Warning Temperature Time: 0 minutes 00:09:08.059 Critical Temperature Time: 0 minutes 00:09:08.059 00:09:08.059 Number of Queues 00:09:08.059 ================ 00:09:08.059 Number of I/O Submission Queues: 64 00:09:08.059 Number of I/O Completion Queues: 64 00:09:08.059 00:09:08.059 ZNS Specific Controller Data 00:09:08.059 ============================ 00:09:08.059 Zone Append Size Limit: 0 00:09:08.059 00:09:08.059 00:09:08.059 Active Namespaces 00:09:08.059 ================= 00:09:08.059 Namespace ID:1 00:09:08.059 Error Recovery Timeout: Unlimited 00:09:08.059 Command Set Identifier: NVM (00h) 00:09:08.059 Deallocate: Supported 00:09:08.059 Deallocated/Unwritten Error: Supported 00:09:08.059 Deallocated Read Value: All 0x00 00:09:08.059 Deallocate in Write Zeroes: Not Supported 00:09:08.059 Deallocated Guard Field: 0xFFFF 00:09:08.059 Flush: Supported 00:09:08.059 Reservation: Not Supported 00:09:08.059 Metadata Transferred as: Separate Metadata Buffer 00:09:08.059 Namespace Sharing Capabilities: Private 00:09:08.059 Size (in LBAs): 1548666 (5GiB) 00:09:08.059 Capacity (in LBAs): 1548666 (5GiB) 00:09:08.059 Utilization (in LBAs): 1548666 (5GiB) 00:09:08.059 Thin Provisioning: Not Supported 00:09:08.059 Per-NS Atomic Units: No 00:09:08.059 Maximum Single Source Range Length: 128 00:09:08.059 Maximum Copy Length: 128 00:09:08.059 Maximum Source Range Count: 128 00:09:08.059 NGUID/EUI64 Never Reused: No 00:09:08.059 Namespace Write Protected: No 00:09:08.059 Number of LBA Formats: 8 00:09:08.059 Current LBA Format: LBA Format #07 00:09:08.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.059 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:08.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.059 00:09:08.059 06:34:18 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:08.059 06:34:18 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:08.321 ===================================================== 00:09:08.321 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:08.321 ===================================================== 00:09:08.321 Controller Capabilities/Features 00:09:08.321 ================================ 00:09:08.321 Vendor ID: 1b36 00:09:08.321 Subsystem Vendor ID: 1af4 00:09:08.321 Serial Number: 12341 00:09:08.321 Model Number: QEMU NVMe Ctrl 00:09:08.321 Firmware Version: 8.0.0 00:09:08.321 Recommended Arb Burst: 6 00:09:08.321 IEEE OUI Identifier: 00 54 52 00:09:08.321 Multi-path I/O 00:09:08.321 May have multiple subsystem ports: No 00:09:08.321 May have multiple controllers: No 00:09:08.321 Associated with SR-IOV VF: No 00:09:08.321 Max Data Transfer Size: 524288 00:09:08.321 Max Number of Namespaces: 256 00:09:08.321 Max Number of I/O Queues: 64 00:09:08.321 NVMe Specification Version (VS): 1.4 00:09:08.321 NVMe Specification Version (Identify): 1.4 00:09:08.321 Maximum Queue Entries: 2048 00:09:08.321 Contiguous Queues Required: Yes 00:09:08.321 Arbitration Mechanisms Supported 00:09:08.321 Weighted Round Robin: Not Supported 00:09:08.321 Vendor Specific: Not Supported 00:09:08.321 Reset Timeout: 7500 ms 00:09:08.321 Doorbell Stride: 4 bytes 00:09:08.321 NVM Subsystem Reset: Not Supported 00:09:08.321 Command Sets Supported 00:09:08.321 NVM Command Set: Supported 00:09:08.321 Boot Partition: Not Supported 00:09:08.321 Memory Page Size Minimum: 4096 bytes 00:09:08.321 Memory Page Size Maximum: 65536 bytes 00:09:08.321 Persistent Memory Region: Not Supported 00:09:08.321 Optional Asynchronous Events Supported 00:09:08.321 Namespace Attribute Notices: Supported 00:09:08.321 Firmware Activation Notices: Not Supported 00:09:08.321 ANA Change Notices: Not Supported 00:09:08.321 PLE Aggregate Log Change Notices: Not Supported 00:09:08.321 LBA Status Info Alert Notices: Not Supported 00:09:08.321 EGE Aggregate Log Change Notices: Not Supported 00:09:08.321 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.321 Zone Descriptor Change Notices: Not Supported 00:09:08.321 Discovery Log Change Notices: Not Supported 00:09:08.321 Controller Attributes 00:09:08.321 128-bit Host Identifier: Not Supported 00:09:08.321 Non-Operational Permissive Mode: Not Supported 00:09:08.321 NVM Sets: Not Supported 00:09:08.321 Read Recovery Levels: Not Supported 00:09:08.321 Endurance Groups: Not Supported 00:09:08.321 Predictable Latency Mode: Not Supported 00:09:08.321 Traffic Based Keep ALive: Not Supported 00:09:08.321 Namespace Granularity: Not Supported 00:09:08.321 SQ Associations: Not Supported 00:09:08.321 UUID List: Not Supported 00:09:08.321 Multi-Domain Subsystem: Not Supported 00:09:08.321 Fixed Capacity Management: Not Supported 00:09:08.321 Variable Capacity Management: Not Supported 00:09:08.321 Delete Endurance Group: Not Supported 00:09:08.321 Delete NVM Set: Not Supported 00:09:08.321 Extended LBA Formats Supported: Supported 00:09:08.321 Flexible Data Placement Supported: Not Supported 
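The namespace sizes above are printed both as an LBA count and as a GiB figure; the GiB number appears to be just LBA count times the current format's data size, floored to whole GiB (the 12340 namespace is 5.9 GiB and prints as 5GiB). A worked check against two namespaces from the dumps above:

    GIB = 1 << 30

    # (name, LBA count, data size) copied from the identify output above.
    namespaces = [
        ("12340 ns1", 1_548_666, 4096),  # current LBA format #07: 4096+64
        ("12342 ns1", 1_048_576, 4096),  # current LBA format #04: 4096+0
    ]

    for name, lbas, data_size in namespaces:
        size = lbas * data_size            # metadata bytes are not counted
        print(f"{name}: {lbas} LBAs -> {size // GIB}GiB")
    # 12340 ns1: 1548666 LBAs -> 5GiB   (5.9 GiB floored, matching the dump)
    # 12342 ns1: 1048576 LBAs -> 4GiB   (exactly 4 GiB)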
00:09:08.321 00:09:08.321 Controller Memory Buffer Support 00:09:08.321 ================================ 00:09:08.321 Supported: No 00:09:08.321 00:09:08.321 Persistent Memory Region Support 00:09:08.321 ================================ 00:09:08.321 Supported: No 00:09:08.321 00:09:08.321 Admin Command Set Attributes 00:09:08.321 ============================ 00:09:08.321 Security Send/Receive: Not Supported 00:09:08.321 Format NVM: Supported 00:09:08.321 Firmware Activate/Download: Not Supported 00:09:08.321 Namespace Management: Supported 00:09:08.321 Device Self-Test: Not Supported 00:09:08.321 Directives: Supported 00:09:08.321 NVMe-MI: Not Supported 00:09:08.321 Virtualization Management: Not Supported 00:09:08.321 Doorbell Buffer Config: Supported 00:09:08.321 Get LBA Status Capability: Not Supported 00:09:08.321 Command & Feature Lockdown Capability: Not Supported 00:09:08.321 Abort Command Limit: 4 00:09:08.321 Async Event Request Limit: 4 00:09:08.321 Number of Firmware Slots: N/A 00:09:08.321 Firmware Slot 1 Read-Only: N/A 00:09:08.321 Firmware Activation Without Reset: N/A 00:09:08.322 Multiple Update Detection Support: N/A 00:09:08.322 Firmware Update Granularity: No Information Provided 00:09:08.322 Per-Namespace SMART Log: Yes 00:09:08.322 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.322 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:08.322 Command Effects Log Page: Supported 00:09:08.322 Get Log Page Extended Data: Supported 00:09:08.322 Telemetry Log Pages: Not Supported 00:09:08.322 Persistent Event Log Pages: Not Supported 00:09:08.322 Supported Log Pages Log Page: May Support 00:09:08.322 Commands Supported & Effects Log Page: Not Supported 00:09:08.322 Feature Identifiers & Effects Log Page:May Support 00:09:08.322 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.322 Data Area 4 for Telemetry Log: Not Supported 00:09:08.322 Error Log Page Entries Supported: 1 00:09:08.322 Keep Alive: Not Supported 00:09:08.322 00:09:08.322 NVM Command Set Attributes 00:09:08.322 ========================== 00:09:08.322 Submission Queue Entry Size 00:09:08.322 Max: 64 00:09:08.322 Min: 64 00:09:08.322 Completion Queue Entry Size 00:09:08.322 Max: 16 00:09:08.322 Min: 16 00:09:08.322 Number of Namespaces: 256 00:09:08.322 Compare Command: Supported 00:09:08.322 Write Uncorrectable Command: Not Supported 00:09:08.322 Dataset Management Command: Supported 00:09:08.322 Write Zeroes Command: Supported 00:09:08.322 Set Features Save Field: Supported 00:09:08.322 Reservations: Not Supported 00:09:08.322 Timestamp: Supported 00:09:08.322 Copy: Supported 00:09:08.322 Volatile Write Cache: Present 00:09:08.322 Atomic Write Unit (Normal): 1 00:09:08.322 Atomic Write Unit (PFail): 1 00:09:08.322 Atomic Compare & Write Unit: 1 00:09:08.322 Fused Compare & Write: Not Supported 00:09:08.322 Scatter-Gather List 00:09:08.322 SGL Command Set: Supported 00:09:08.322 SGL Keyed: Not Supported 00:09:08.322 SGL Bit Bucket Descriptor: Not Supported 00:09:08.322 SGL Metadata Pointer: Not Supported 00:09:08.322 Oversized SGL: Not Supported 00:09:08.322 SGL Metadata Address: Not Supported 00:09:08.322 SGL Offset: Not Supported 00:09:08.322 Transport SGL Data Block: Not Supported 00:09:08.322 Replay Protected Memory Block: Not Supported 00:09:08.322 00:09:08.322 Firmware Slot Information 00:09:08.322 ========================= 00:09:08.322 Active slot: 1 00:09:08.322 Slot 1 Firmware Revision: 1.0 00:09:08.322 00:09:08.322 00:09:08.322 Commands Supported and Effects 00:09:08.322 
============================== 00:09:08.322 Admin Commands 00:09:08.322 -------------- 00:09:08.322 Delete I/O Submission Queue (00h): Supported 00:09:08.322 Create I/O Submission Queue (01h): Supported 00:09:08.322 Get Log Page (02h): Supported 00:09:08.322 Delete I/O Completion Queue (04h): Supported 00:09:08.322 Create I/O Completion Queue (05h): Supported 00:09:08.322 Identify (06h): Supported 00:09:08.322 Abort (08h): Supported 00:09:08.322 Set Features (09h): Supported 00:09:08.322 Get Features (0Ah): Supported 00:09:08.322 Asynchronous Event Request (0Ch): Supported 00:09:08.322 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.322 Directive Send (19h): Supported 00:09:08.322 Directive Receive (1Ah): Supported 00:09:08.322 Virtualization Management (1Ch): Supported 00:09:08.322 Doorbell Buffer Config (7Ch): Supported 00:09:08.322 Format NVM (80h): Supported LBA-Change 00:09:08.322 I/O Commands 00:09:08.322 ------------ 00:09:08.322 Flush (00h): Supported LBA-Change 00:09:08.322 Write (01h): Supported LBA-Change 00:09:08.322 Read (02h): Supported 00:09:08.322 Compare (05h): Supported 00:09:08.322 Write Zeroes (08h): Supported LBA-Change 00:09:08.322 Dataset Management (09h): Supported LBA-Change 00:09:08.322 Unknown (0Ch): Supported 00:09:08.322 Unknown (12h): Supported 00:09:08.322 Copy (19h): Supported LBA-Change 00:09:08.322 Unknown (1Dh): Supported LBA-Change 00:09:08.322 00:09:08.322 Error Log 00:09:08.322 ========= 00:09:08.322 00:09:08.322 Arbitration 00:09:08.322 =========== 00:09:08.322 Arbitration Burst: no limit 00:09:08.322 00:09:08.322 Power Management 00:09:08.322 ================ 00:09:08.322 Number of Power States: 1 00:09:08.322 Current Power State: Power State #0 00:09:08.322 Power State #0: 00:09:08.322 Max Power: 25.00 W 00:09:08.322 Non-Operational State: Operational 00:09:08.322 Entry Latency: 16 microseconds 00:09:08.322 Exit Latency: 4 microseconds 00:09:08.322 Relative Read Throughput: 0 00:09:08.322 Relative Read Latency: 0 00:09:08.322 Relative Write Throughput: 0 00:09:08.322 Relative Write Latency: 0 00:09:08.322 Idle Power: Not Reported 00:09:08.322 Active Power: Not Reported 00:09:08.322 Non-Operational Permissive Mode: Not Supported 00:09:08.322 00:09:08.322 Health Information 00:09:08.322 ================== 00:09:08.322 Critical Warnings: 00:09:08.322 Available Spare Space: OK 00:09:08.322 Temperature: OK 00:09:08.322 Device Reliability: OK 00:09:08.322 Read Only: No 00:09:08.322 Volatile Memory Backup: OK 00:09:08.322 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.322 Available Spare: 0% 00:09:08.322 Available Spare Threshold: 0% 00:09:08.322 Life Percentage Used: 0% 00:09:08.322 Data Units Read: 1164 00:09:08.322 Data Units Written: 539 00:09:08.322 Host Read Commands: 56769 00:09:08.322 Host Write Commands: 27878 00:09:08.322 Controller Busy Time: 0 minutes 00:09:08.322 Power Cycles: 0 00:09:08.322 Power On Hours: 0 hours 00:09:08.322 Unsafe Shutdowns: 0 00:09:08.322 Unrecoverable Media Errors: 0 00:09:08.322 Lifetime Error Log Entries: 0 00:09:08.322 Warning Temperature Time: 0 minutes 00:09:08.322 Critical Temperature Time: 0 minutes 00:09:08.322 00:09:08.322 Number of Queues 00:09:08.322 ================ 00:09:08.322 Number of I/O Submission Queues: 64 00:09:08.322 Number of I/O Completion Queues: 64 00:09:08.322 00:09:08.322 ZNS Specific Controller Data 00:09:08.322 ============================ 00:09:08.322 Zone Append Size Limit: 0 00:09:08.322 00:09:08.322 
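The SMART temperatures in these dumps are reported in Kelvin with a Celsius conversion in parentheses, e.g. "323 Kelvin (50 Celsius)" above. The printed pairs imply an integer offset of 273 rather than 273.15; a one-line check under that assumption:

    def kelvin_to_celsius(k: int) -> int:
        # 323 -> 50 and 343 -> 70 match the dump; the exact physical
        # offset would be 273.15, so this mirrors the tool's rounding.
        return k - 273

    for k in (323, 343):
        print(f"{k} Kelvin ({kelvin_to_celsius(k)} Celsius)")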
00:09:08.322 Active Namespaces 00:09:08.322 ================= 00:09:08.322 Namespace ID:1 00:09:08.323 Error Recovery Timeout: Unlimited 00:09:08.323 Command Set Identifier: NVM (00h) 00:09:08.323 Deallocate: Supported 00:09:08.323 Deallocated/Unwritten Error: Supported 00:09:08.323 Deallocated Read Value: All 0x00 00:09:08.323 Deallocate in Write Zeroes: Not Supported 00:09:08.323 Deallocated Guard Field: 0xFFFF 00:09:08.323 Flush: Supported 00:09:08.323 Reservation: Not Supported 00:09:08.323 Namespace Sharing Capabilities: Private 00:09:08.323 Size (in LBAs): 1310720 (5GiB) 00:09:08.323 Capacity (in LBAs): 1310720 (5GiB) 00:09:08.323 Utilization (in LBAs): 1310720 (5GiB) 00:09:08.323 Thin Provisioning: Not Supported 00:09:08.323 Per-NS Atomic Units: No 00:09:08.323 Maximum Single Source Range Length: 128 00:09:08.323 Maximum Copy Length: 128 00:09:08.323 Maximum Source Range Count: 128 00:09:08.323 NGUID/EUI64 Never Reused: No 00:09:08.323 Namespace Write Protected: No 00:09:08.323 Number of LBA Formats: 8 00:09:08.323 Current LBA Format: LBA Format #04 00:09:08.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.323 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.323 00:09:08.323 06:34:19 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:08.323 06:34:19 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:08.585 ===================================================== 00:09:08.585 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:08.585 ===================================================== 00:09:08.585 Controller Capabilities/Features 00:09:08.585 ================================ 00:09:08.585 Vendor ID: 1b36 00:09:08.585 Subsystem Vendor ID: 1af4 00:09:08.585 Serial Number: 12342 00:09:08.585 Model Number: QEMU NVMe Ctrl 00:09:08.585 Firmware Version: 8.0.0 00:09:08.585 Recommended Arb Burst: 6 00:09:08.585 IEEE OUI Identifier: 00 54 52 00:09:08.585 Multi-path I/O 00:09:08.585 May have multiple subsystem ports: No 00:09:08.585 May have multiple controllers: No 00:09:08.585 Associated with SR-IOV VF: No 00:09:08.585 Max Data Transfer Size: 524288 00:09:08.585 Max Number of Namespaces: 256 00:09:08.585 Max Number of I/O Queues: 64 00:09:08.585 NVMe Specification Version (VS): 1.4 00:09:08.585 NVMe Specification Version (Identify): 1.4 00:09:08.585 Maximum Queue Entries: 2048 00:09:08.585 Contiguous Queues Required: Yes 00:09:08.585 Arbitration Mechanisms Supported 00:09:08.585 Weighted Round Robin: Not Supported 00:09:08.585 Vendor Specific: Not Supported 00:09:08.585 Reset Timeout: 7500 ms 00:09:08.585 Doorbell Stride: 4 bytes 00:09:08.585 NVM Subsystem Reset: Not Supported 00:09:08.585 Command Sets Supported 00:09:08.585 NVM Command Set: Supported 00:09:08.585 Boot Partition: Not Supported 00:09:08.585 Memory Page Size Minimum: 4096 bytes 00:09:08.585 Memory Page Size Maximum: 65536 bytes 00:09:08.585 Persistent Memory Region: Not Supported 00:09:08.585 Optional Asynchronous Events Supported 00:09:08.585 Namespace Attribute Notices: Supported 00:09:08.585 Firmware Activation Notices: Not Supported 00:09:08.585 ANA Change 
Notices: Not Supported 00:09:08.585 PLE Aggregate Log Change Notices: Not Supported 00:09:08.585 LBA Status Info Alert Notices: Not Supported 00:09:08.585 EGE Aggregate Log Change Notices: Not Supported 00:09:08.585 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.585 Zone Descriptor Change Notices: Not Supported 00:09:08.585 Discovery Log Change Notices: Not Supported 00:09:08.585 Controller Attributes 00:09:08.585 128-bit Host Identifier: Not Supported 00:09:08.585 Non-Operational Permissive Mode: Not Supported 00:09:08.585 NVM Sets: Not Supported 00:09:08.585 Read Recovery Levels: Not Supported 00:09:08.585 Endurance Groups: Not Supported 00:09:08.585 Predictable Latency Mode: Not Supported 00:09:08.586 Traffic Based Keep ALive: Not Supported 00:09:08.586 Namespace Granularity: Not Supported 00:09:08.586 SQ Associations: Not Supported 00:09:08.586 UUID List: Not Supported 00:09:08.586 Multi-Domain Subsystem: Not Supported 00:09:08.586 Fixed Capacity Management: Not Supported 00:09:08.586 Variable Capacity Management: Not Supported 00:09:08.586 Delete Endurance Group: Not Supported 00:09:08.586 Delete NVM Set: Not Supported 00:09:08.586 Extended LBA Formats Supported: Supported 00:09:08.586 Flexible Data Placement Supported: Not Supported 00:09:08.586 00:09:08.586 Controller Memory Buffer Support 00:09:08.586 ================================ 00:09:08.586 Supported: No 00:09:08.586 00:09:08.586 Persistent Memory Region Support 00:09:08.586 ================================ 00:09:08.586 Supported: No 00:09:08.586 00:09:08.586 Admin Command Set Attributes 00:09:08.586 ============================ 00:09:08.586 Security Send/Receive: Not Supported 00:09:08.586 Format NVM: Supported 00:09:08.586 Firmware Activate/Download: Not Supported 00:09:08.586 Namespace Management: Supported 00:09:08.586 Device Self-Test: Not Supported 00:09:08.586 Directives: Supported 00:09:08.586 NVMe-MI: Not Supported 00:09:08.586 Virtualization Management: Not Supported 00:09:08.586 Doorbell Buffer Config: Supported 00:09:08.586 Get LBA Status Capability: Not Supported 00:09:08.586 Command & Feature Lockdown Capability: Not Supported 00:09:08.586 Abort Command Limit: 4 00:09:08.586 Async Event Request Limit: 4 00:09:08.586 Number of Firmware Slots: N/A 00:09:08.586 Firmware Slot 1 Read-Only: N/A 00:09:08.586 Firmware Activation Without Reset: N/A 00:09:08.586 Multiple Update Detection Support: N/A 00:09:08.586 Firmware Update Granularity: No Information Provided 00:09:08.586 Per-Namespace SMART Log: Yes 00:09:08.586 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.586 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:08.586 Command Effects Log Page: Supported 00:09:08.586 Get Log Page Extended Data: Supported 00:09:08.586 Telemetry Log Pages: Not Supported 00:09:08.586 Persistent Event Log Pages: Not Supported 00:09:08.586 Supported Log Pages Log Page: May Support 00:09:08.586 Commands Supported & Effects Log Page: Not Supported 00:09:08.586 Feature Identifiers & Effects Log Page:May Support 00:09:08.586 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.586 Data Area 4 for Telemetry Log: Not Supported 00:09:08.586 Error Log Page Entries Supported: 1 00:09:08.586 Keep Alive: Not Supported 00:09:08.586 00:09:08.586 NVM Command Set Attributes 00:09:08.586 ========================== 00:09:08.586 Submission Queue Entry Size 00:09:08.586 Max: 64 00:09:08.586 Min: 64 00:09:08.586 Completion Queue Entry Size 00:09:08.586 Max: 16 00:09:08.586 Min: 16 00:09:08.586 Number of Namespaces: 256 
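Given the entry sizes just listed (64-byte submission entries, 16-byte completion entries), the controller's Maximum Queue Entries of 2048, and its 64 I/O queues, the memory behind full-depth queue pairs is straightforward arithmetic. A sketch using only the limits from this dump:

    # Limits copied from the identify output above.
    MAX_QUEUE_ENTRIES = 2048   # Maximum Queue Entries
    SQE_SIZE = 64              # Submission Queue Entry Size (Max)
    CQE_SIZE = 16              # Completion Queue Entry Size (Max)
    MAX_IO_QUEUES = 64         # Max Number of I/O Queues

    sq = MAX_QUEUE_ENTRIES * SQE_SIZE      # 131072 B = 128 KiB per SQ
    cq = MAX_QUEUE_ENTRIES * CQE_SIZE      # 32768 B  =  32 KiB per CQ
    total = MAX_IO_QUEUES * (sq + cq)
    print(f"{(sq + cq) // 1024} KiB per pair, "
          f"{total // (1024 * 1024)} MiB for 64 pairs")   # 160 KiB, 10 MiB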
00:09:08.586 Compare Command: Supported 00:09:08.586 Write Uncorrectable Command: Not Supported 00:09:08.586 Dataset Management Command: Supported 00:09:08.586 Write Zeroes Command: Supported 00:09:08.586 Set Features Save Field: Supported 00:09:08.586 Reservations: Not Supported 00:09:08.586 Timestamp: Supported 00:09:08.586 Copy: Supported 00:09:08.586 Volatile Write Cache: Present 00:09:08.586 Atomic Write Unit (Normal): 1 00:09:08.586 Atomic Write Unit (PFail): 1 00:09:08.586 Atomic Compare & Write Unit: 1 00:09:08.586 Fused Compare & Write: Not Supported 00:09:08.586 Scatter-Gather List 00:09:08.586 SGL Command Set: Supported 00:09:08.586 SGL Keyed: Not Supported 00:09:08.586 SGL Bit Bucket Descriptor: Not Supported 00:09:08.586 SGL Metadata Pointer: Not Supported 00:09:08.586 Oversized SGL: Not Supported 00:09:08.586 SGL Metadata Address: Not Supported 00:09:08.586 SGL Offset: Not Supported 00:09:08.586 Transport SGL Data Block: Not Supported 00:09:08.586 Replay Protected Memory Block: Not Supported 00:09:08.586 00:09:08.586 Firmware Slot Information 00:09:08.586 ========================= 00:09:08.586 Active slot: 1 00:09:08.586 Slot 1 Firmware Revision: 1.0 00:09:08.586 00:09:08.586 00:09:08.586 Commands Supported and Effects 00:09:08.586 ============================== 00:09:08.586 Admin Commands 00:09:08.586 -------------- 00:09:08.586 Delete I/O Submission Queue (00h): Supported 00:09:08.586 Create I/O Submission Queue (01h): Supported 00:09:08.586 Get Log Page (02h): Supported 00:09:08.586 Delete I/O Completion Queue (04h): Supported 00:09:08.586 Create I/O Completion Queue (05h): Supported 00:09:08.586 Identify (06h): Supported 00:09:08.586 Abort (08h): Supported 00:09:08.586 Set Features (09h): Supported 00:09:08.586 Get Features (0Ah): Supported 00:09:08.586 Asynchronous Event Request (0Ch): Supported 00:09:08.586 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.586 Directive Send (19h): Supported 00:09:08.586 Directive Receive (1Ah): Supported 00:09:08.586 Virtualization Management (1Ch): Supported 00:09:08.586 Doorbell Buffer Config (7Ch): Supported 00:09:08.586 Format NVM (80h): Supported LBA-Change 00:09:08.586 I/O Commands 00:09:08.586 ------------ 00:09:08.586 Flush (00h): Supported LBA-Change 00:09:08.586 Write (01h): Supported LBA-Change 00:09:08.586 Read (02h): Supported 00:09:08.586 Compare (05h): Supported 00:09:08.586 Write Zeroes (08h): Supported LBA-Change 00:09:08.586 Dataset Management (09h): Supported LBA-Change 00:09:08.586 Unknown (0Ch): Supported 00:09:08.586 Unknown (12h): Supported 00:09:08.586 Copy (19h): Supported LBA-Change 00:09:08.586 Unknown (1Dh): Supported LBA-Change 00:09:08.586 00:09:08.586 Error Log 00:09:08.586 ========= 00:09:08.586 00:09:08.586 Arbitration 00:09:08.586 =========== 00:09:08.586 Arbitration Burst: no limit 00:09:08.586 00:09:08.586 Power Management 00:09:08.586 ================ 00:09:08.586 Number of Power States: 1 00:09:08.586 Current Power State: Power State #0 00:09:08.586 Power State #0: 00:09:08.586 Max Power: 25.00 W 00:09:08.586 Non-Operational State: Operational 00:09:08.586 Entry Latency: 16 microseconds 00:09:08.586 Exit Latency: 4 microseconds 00:09:08.586 Relative Read Throughput: 0 00:09:08.586 Relative Read Latency: 0 00:09:08.586 Relative Write Throughput: 0 00:09:08.586 Relative Write Latency: 0 00:09:08.586 Idle Power: Not Reported 00:09:08.586 Active Power: Not Reported 00:09:08.586 Non-Operational Permissive Mode: Not Supported 00:09:08.586 00:09:08.586 Health Information 00:09:08.586 
================== 00:09:08.586 Critical Warnings: 00:09:08.586 Available Spare Space: OK 00:09:08.586 Temperature: OK 00:09:08.586 Device Reliability: OK 00:09:08.586 Read Only: No 00:09:08.586 Volatile Memory Backup: OK 00:09:08.586 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.586 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.586 Available Spare: 0% 00:09:08.586 Available Spare Threshold: 0% 00:09:08.586 Life Percentage Used: 0% 00:09:08.586 Data Units Read: 3620 00:09:08.586 Data Units Written: 1671 00:09:08.586 Host Read Commands: 171874 00:09:08.586 Host Write Commands: 84211 00:09:08.586 Controller Busy Time: 0 minutes 00:09:08.586 Power Cycles: 0 00:09:08.586 Power On Hours: 0 hours 00:09:08.586 Unsafe Shutdowns: 0 00:09:08.586 Unrecoverable Media Errors: 0 00:09:08.586 Lifetime Error Log Entries: 0 00:09:08.586 Warning Temperature Time: 0 minutes 00:09:08.586 Critical Temperature Time: 0 minutes 00:09:08.586 00:09:08.586 Number of Queues 00:09:08.586 ================ 00:09:08.586 Number of I/O Submission Queues: 64 00:09:08.586 Number of I/O Completion Queues: 64 00:09:08.586 00:09:08.586 ZNS Specific Controller Data 00:09:08.586 ============================ 00:09:08.586 Zone Append Size Limit: 0 00:09:08.586 00:09:08.586 00:09:08.586 Active Namespaces 00:09:08.586 ================= 00:09:08.586 Namespace ID:1 00:09:08.586 Error Recovery Timeout: Unlimited 00:09:08.586 Command Set Identifier: NVM (00h) 00:09:08.586 Deallocate: Supported 00:09:08.586 Deallocated/Unwritten Error: Supported 00:09:08.586 Deallocated Read Value: All 0x00 00:09:08.586 Deallocate in Write Zeroes: Not Supported 00:09:08.586 Deallocated Guard Field: 0xFFFF 00:09:08.586 Flush: Supported 00:09:08.586 Reservation: Not Supported 00:09:08.586 Namespace Sharing Capabilities: Private 00:09:08.586 Size (in LBAs): 1048576 (4GiB) 00:09:08.586 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.586 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.586 Thin Provisioning: Not Supported 00:09:08.586 Per-NS Atomic Units: No 00:09:08.587 Maximum Single Source Range Length: 128 00:09:08.587 Maximum Copy Length: 128 00:09:08.587 Maximum Source Range Count: 128 00:09:08.587 NGUID/EUI64 Never Reused: No 00:09:08.587 Namespace Write Protected: No 00:09:08.587 Number of LBA Formats: 8 00:09:08.587 Current LBA Format: LBA Format #04 00:09:08.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.587 00:09:08.587 Namespace ID:2 00:09:08.587 Error Recovery Timeout: Unlimited 00:09:08.587 Command Set Identifier: NVM (00h) 00:09:08.587 Deallocate: Supported 00:09:08.587 Deallocated/Unwritten Error: Supported 00:09:08.587 Deallocated Read Value: All 0x00 00:09:08.587 Deallocate in Write Zeroes: Not Supported 00:09:08.587 Deallocated Guard Field: 0xFFFF 00:09:08.587 Flush: Supported 00:09:08.587 Reservation: Not Supported 00:09:08.587 Namespace Sharing Capabilities: Private 00:09:08.587 Size (in LBAs): 1048576 (4GiB) 00:09:08.587 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.587 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.587 Thin Provisioning: Not Supported 00:09:08.587 Per-NS Atomic Units: No 
00:09:08.587 Maximum Single Source Range Length: 128 00:09:08.587 Maximum Copy Length: 128 00:09:08.587 Maximum Source Range Count: 128 00:09:08.587 NGUID/EUI64 Never Reused: No 00:09:08.587 Namespace Write Protected: No 00:09:08.587 Number of LBA Formats: 8 00:09:08.587 Current LBA Format: LBA Format #04 00:09:08.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.587 00:09:08.587 Namespace ID:3 00:09:08.587 Error Recovery Timeout: Unlimited 00:09:08.587 Command Set Identifier: NVM (00h) 00:09:08.587 Deallocate: Supported 00:09:08.587 Deallocated/Unwritten Error: Supported 00:09:08.587 Deallocated Read Value: All 0x00 00:09:08.587 Deallocate in Write Zeroes: Not Supported 00:09:08.587 Deallocated Guard Field: 0xFFFF 00:09:08.587 Flush: Supported 00:09:08.587 Reservation: Not Supported 00:09:08.587 Namespace Sharing Capabilities: Private 00:09:08.587 Size (in LBAs): 1048576 (4GiB) 00:09:08.587 Capacity (in LBAs): 1048576 (4GiB) 00:09:08.587 Utilization (in LBAs): 1048576 (4GiB) 00:09:08.587 Thin Provisioning: Not Supported 00:09:08.587 Per-NS Atomic Units: No 00:09:08.587 Maximum Single Source Range Length: 128 00:09:08.587 Maximum Copy Length: 128 00:09:08.587 Maximum Source Range Count: 128 00:09:08.587 NGUID/EUI64 Never Reused: No 00:09:08.587 Namespace Write Protected: No 00:09:08.587 Number of LBA Formats: 8 00:09:08.587 Current LBA Format: LBA Format #04 00:09:08.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.587 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.587 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.587 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.587 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.587 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.587 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:08.587 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.587 00:09:08.587 06:34:19 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:08.587 06:34:19 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:08.849 ===================================================== 00:09:08.849 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:08.849 ===================================================== 00:09:08.849 Controller Capabilities/Features 00:09:08.849 ================================ 00:09:08.849 Vendor ID: 1b36 00:09:08.849 Subsystem Vendor ID: 1af4 00:09:08.849 Serial Number: 12343 00:09:08.849 Model Number: QEMU NVMe Ctrl 00:09:08.849 Firmware Version: 8.0.0 00:09:08.849 Recommended Arb Burst: 6 00:09:08.849 IEEE OUI Identifier: 00 54 52 00:09:08.849 Multi-path I/O 00:09:08.849 May have multiple subsystem ports: No 00:09:08.849 May have multiple controllers: Yes 00:09:08.849 Associated with SR-IOV VF: No 00:09:08.849 Max Data Transfer Size: 524288 00:09:08.849 Max Number of Namespaces: 256 00:09:08.849 Max Number of I/O Queues: 64 00:09:08.849 NVMe Specification Version (VS): 1.4 00:09:08.849 NVMe Specification Version (Identify): 1.4 00:09:08.849 Maximum Queue Entries: 2048 
00:09:08.849 Contiguous Queues Required: Yes 00:09:08.849 Arbitration Mechanisms Supported 00:09:08.849 Weighted Round Robin: Not Supported 00:09:08.849 Vendor Specific: Not Supported 00:09:08.849 Reset Timeout: 7500 ms 00:09:08.849 Doorbell Stride: 4 bytes 00:09:08.849 NVM Subsystem Reset: Not Supported 00:09:08.849 Command Sets Supported 00:09:08.849 NVM Command Set: Supported 00:09:08.849 Boot Partition: Not Supported 00:09:08.849 Memory Page Size Minimum: 4096 bytes 00:09:08.849 Memory Page Size Maximum: 65536 bytes 00:09:08.849 Persistent Memory Region: Not Supported 00:09:08.849 Optional Asynchronous Events Supported 00:09:08.849 Namespace Attribute Notices: Supported 00:09:08.849 Firmware Activation Notices: Not Supported 00:09:08.849 ANA Change Notices: Not Supported 00:09:08.849 PLE Aggregate Log Change Notices: Not Supported 00:09:08.849 LBA Status Info Alert Notices: Not Supported 00:09:08.849 EGE Aggregate Log Change Notices: Not Supported 00:09:08.849 Normal NVM Subsystem Shutdown event: Not Supported 00:09:08.849 Zone Descriptor Change Notices: Not Supported 00:09:08.849 Discovery Log Change Notices: Not Supported 00:09:08.849 Controller Attributes 00:09:08.849 128-bit Host Identifier: Not Supported 00:09:08.849 Non-Operational Permissive Mode: Not Supported 00:09:08.849 NVM Sets: Not Supported 00:09:08.849 Read Recovery Levels: Not Supported 00:09:08.849 Endurance Groups: Supported 00:09:08.849 Predictable Latency Mode: Not Supported 00:09:08.849 Traffic Based Keep Alive: Not Supported 00:09:08.849 Namespace Granularity: Not Supported 00:09:08.849 SQ Associations: Not Supported 00:09:08.849 UUID List: Not Supported 00:09:08.849 Multi-Domain Subsystem: Not Supported 00:09:08.849 Fixed Capacity Management: Not Supported 00:09:08.849 Variable Capacity Management: Not Supported 00:09:08.849 Delete Endurance Group: Not Supported 00:09:08.849 Delete NVM Set: Not Supported 00:09:08.849 Extended LBA Formats Supported: Supported 00:09:08.849 Flexible Data Placement Supported: Supported 00:09:08.849 00:09:08.849 Controller Memory Buffer Support 00:09:08.849 ================================ 00:09:08.849 Supported: No 00:09:08.849 00:09:08.849 Persistent Memory Region Support 00:09:08.849 ================================ 00:09:08.849 Supported: No 00:09:08.849 00:09:08.849 Admin Command Set Attributes 00:09:08.849 ============================ 00:09:08.849 Security Send/Receive: Not Supported 00:09:08.849 Format NVM: Supported 00:09:08.849 Firmware Activate/Download: Not Supported 00:09:08.849 Namespace Management: Supported 00:09:08.849 Device Self-Test: Not Supported 00:09:08.849 Directives: Supported 00:09:08.849 NVMe-MI: Not Supported 00:09:08.849 Virtualization Management: Not Supported 00:09:08.849 Doorbell Buffer Config: Supported 00:09:08.849 Get LBA Status Capability: Not Supported 00:09:08.849 Command & Feature Lockdown Capability: Not Supported 00:09:08.849 Abort Command Limit: 4 00:09:08.849 Async Event Request Limit: 4 00:09:08.849 Number of Firmware Slots: N/A 00:09:08.849 Firmware Slot 1 Read-Only: N/A 00:09:08.849 Firmware Activation Without Reset: N/A 00:09:08.849 Multiple Update Detection Support: N/A 00:09:08.849 Firmware Update Granularity: No Information Provided 00:09:08.849 Per-Namespace SMART Log: Yes 00:09:08.849 Asymmetric Namespace Access Log Page: Not Supported 00:09:08.849 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:08.850 Command Effects Log Page: Supported 00:09:08.850 Get Log Page Extended Data: Supported 00:09:08.850 Telemetry Log Pages: Not
Supported 00:09:08.850 Persistent Event Log Pages: Not Supported 00:09:08.850 Supported Log Pages Log Page: May Support 00:09:08.850 Commands Supported & Effects Log Page: Not Supported 00:09:08.850 Feature Identifiers & Effects Log Page: May Support 00:09:08.850 NVMe-MI Commands & Effects Log Page: May Support 00:09:08.850 Data Area 4 for Telemetry Log: Not Supported 00:09:08.850 Error Log Page Entries Supported: 1 00:09:08.850 Keep Alive: Not Supported 00:09:08.850 00:09:08.850 NVM Command Set Attributes 00:09:08.850 ========================== 00:09:08.850 Submission Queue Entry Size 00:09:08.850 Max: 64 00:09:08.850 Min: 64 00:09:08.850 Completion Queue Entry Size 00:09:08.850 Max: 16 00:09:08.850 Min: 16 00:09:08.850 Number of Namespaces: 256 00:09:08.850 Compare Command: Supported 00:09:08.850 Write Uncorrectable Command: Not Supported 00:09:08.850 Dataset Management Command: Supported 00:09:08.850 Write Zeroes Command: Supported 00:09:08.850 Set Features Save Field: Supported 00:09:08.850 Reservations: Not Supported 00:09:08.850 Timestamp: Supported 00:09:08.850 Copy: Supported 00:09:08.850 Volatile Write Cache: Present 00:09:08.850 Atomic Write Unit (Normal): 1 00:09:08.850 Atomic Write Unit (PFail): 1 00:09:08.850 Atomic Compare & Write Unit: 1 00:09:08.850 Fused Compare & Write: Not Supported 00:09:08.850 Scatter-Gather List 00:09:08.850 SGL Command Set: Supported 00:09:08.850 SGL Keyed: Not Supported 00:09:08.850 SGL Bit Bucket Descriptor: Not Supported 00:09:08.850 SGL Metadata Pointer: Not Supported 00:09:08.850 Oversized SGL: Not Supported 00:09:08.850 SGL Metadata Address: Not Supported 00:09:08.850 SGL Offset: Not Supported 00:09:08.850 Transport SGL Data Block: Not Supported 00:09:08.850 Replay Protected Memory Block: Not Supported 00:09:08.850 00:09:08.850 Firmware Slot Information 00:09:08.850 ========================= 00:09:08.850 Active slot: 1 00:09:08.850 Slot 1 Firmware Revision: 1.0 00:09:08.850 00:09:08.850 00:09:08.850 Commands Supported and Effects 00:09:08.850 ============================== 00:09:08.850 Admin Commands 00:09:08.850 -------------- 00:09:08.850 Delete I/O Submission Queue (00h): Supported 00:09:08.850 Create I/O Submission Queue (01h): Supported 00:09:08.850 Get Log Page (02h): Supported 00:09:08.850 Delete I/O Completion Queue (04h): Supported 00:09:08.850 Create I/O Completion Queue (05h): Supported 00:09:08.850 Identify (06h): Supported 00:09:08.850 Abort (08h): Supported 00:09:08.850 Set Features (09h): Supported 00:09:08.850 Get Features (0Ah): Supported 00:09:08.850 Asynchronous Event Request (0Ch): Supported 00:09:08.850 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:08.850 Directive Send (19h): Supported 00:09:08.850 Directive Receive (1Ah): Supported 00:09:08.850 Virtualization Management (1Ch): Supported 00:09:08.850 Doorbell Buffer Config (7Ch): Supported 00:09:08.850 Format NVM (80h): Supported LBA-Change 00:09:08.850 I/O Commands 00:09:08.850 ------------ 00:09:08.850 Flush (00h): Supported LBA-Change 00:09:08.850 Write (01h): Supported LBA-Change 00:09:08.850 Read (02h): Supported 00:09:08.850 Compare (05h): Supported 00:09:08.850 Write Zeroes (08h): Supported LBA-Change 00:09:08.850 Dataset Management (09h): Supported LBA-Change 00:09:08.850 Unknown (0Ch): Supported 00:09:08.850 Unknown (12h): Supported 00:09:08.850 Copy (19h): Supported LBA-Change 00:09:08.850 Unknown (1Dh): Supported LBA-Change 00:09:08.850 00:09:08.850 Error Log 00:09:08.850 ========= 00:09:08.850 00:09:08.850 Arbitration 00:09:08.850 ===========
00:09:08.850 Arbitration Burst: no limit 00:09:08.850 00:09:08.850 Power Management 00:09:08.850 ================ 00:09:08.850 Number of Power States: 1 00:09:08.850 Current Power State: Power State #0 00:09:08.850 Power State #0: 00:09:08.850 Max Power: 25.00 W 00:09:08.850 Non-Operational State: Operational 00:09:08.850 Entry Latency: 16 microseconds 00:09:08.850 Exit Latency: 4 microseconds 00:09:08.850 Relative Read Throughput: 0 00:09:08.850 Relative Read Latency: 0 00:09:08.850 Relative Write Throughput: 0 00:09:08.850 Relative Write Latency: 0 00:09:08.850 Idle Power: Not Reported 00:09:08.850 Active Power: Not Reported 00:09:08.850 Non-Operational Permissive Mode: Not Supported 00:09:08.850 00:09:08.850 Health Information 00:09:08.850 ================== 00:09:08.850 Critical Warnings: 00:09:08.850 Available Spare Space: OK 00:09:08.850 Temperature: OK 00:09:08.850 Device Reliability: OK 00:09:08.850 Read Only: No 00:09:08.850 Volatile Memory Backup: OK 00:09:08.850 Current Temperature: 323 Kelvin (50 Celsius) 00:09:08.850 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:08.850 Available Spare: 0% 00:09:08.850 Available Spare Threshold: 0% 00:09:08.850 Life Percentage Used: 0% 00:09:08.850 Data Units Read: 1360 00:09:08.850 Data Units Written: 631 00:09:08.850 Host Read Commands: 58453 00:09:08.850 Host Write Commands: 28664 00:09:08.850 Controller Busy Time: 0 minutes 00:09:08.850 Power Cycles: 0 00:09:08.850 Power On Hours: 0 hours 00:09:08.850 Unsafe Shutdowns: 0 00:09:08.850 Unrecoverable Media Errors: 0 00:09:08.850 Lifetime Error Log Entries: 0 00:09:08.850 Warning Temperature Time: 0 minutes 00:09:08.850 Critical Temperature Time: 0 minutes 00:09:08.850 00:09:08.850 Number of Queues 00:09:08.850 ================ 00:09:08.850 Number of I/O Submission Queues: 64 00:09:08.850 Number of I/O Completion Queues: 64 00:09:08.850 00:09:08.850 ZNS Specific Controller Data 00:09:08.850 ============================ 00:09:08.850 Zone Append Size Limit: 0 00:09:08.850 00:09:08.850 00:09:08.850 Active Namespaces 00:09:08.850 ================= 00:09:08.850 Namespace ID:1 00:09:08.850 Error Recovery Timeout: Unlimited 00:09:08.850 Command Set Identifier: NVM (00h) 00:09:08.850 Deallocate: Supported 00:09:08.850 Deallocated/Unwritten Error: Supported 00:09:08.850 Deallocated Read Value: All 0x00 00:09:08.850 Deallocate in Write Zeroes: Not Supported 00:09:08.850 Deallocated Guard Field: 0xFFFF 00:09:08.850 Flush: Supported 00:09:08.850 Reservation: Not Supported 00:09:08.850 Namespace Sharing Capabilities: Multiple Controllers 00:09:08.850 Size (in LBAs): 262144 (1GiB) 00:09:08.850 Capacity (in LBAs): 262144 (1GiB) 00:09:08.850 Utilization (in LBAs): 262144 (1GiB) 00:09:08.850 Thin Provisioning: Not Supported 00:09:08.850 Per-NS Atomic Units: No 00:09:08.850 Maximum Single Source Range Length: 128 00:09:08.850 Maximum Copy Length: 128 00:09:08.850 Maximum Source Range Count: 128 00:09:08.850 NGUID/EUI64 Never Reused: No 00:09:08.850 Namespace Write Protected: No 00:09:08.850 Endurance group ID: 1 00:09:08.850 Number of LBA Formats: 8 00:09:08.850 Current LBA Format: LBA Format #04 00:09:08.850 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:08.850 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:08.850 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:08.850 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:08.850 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:08.850 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:08.850 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:08.850 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:08.850 00:09:08.850 Get Feature FDP: 00:09:08.850 ================ 00:09:08.850 Enabled: Yes 00:09:08.850 FDP configuration index: 0 00:09:08.850 00:09:08.850 FDP configurations log page 00:09:08.850 =========================== 00:09:08.850 Number of FDP configurations: 1 00:09:08.850 Version: 0 00:09:08.850 Size: 112 00:09:08.850 FDP Configuration Descriptor: 0 00:09:08.850 Descriptor Size: 96 00:09:08.850 Reclaim Group Identifier format: 2 00:09:08.850 FDP Volatile Write Cache: Not Present 00:09:08.850 FDP Configuration: Valid 00:09:08.850 Vendor Specific Size: 0 00:09:08.850 Number of Reclaim Groups: 2 00:09:08.850 Number of Reclaim Unit Handles: 8 00:09:08.850 Max Placement Identifiers: 128 00:09:08.850 Number of Namespaces Supported: 256 00:09:08.850 Reclaim Unit Nominal Size: 6000000 bytes 00:09:08.851 Estimated Reclaim Unit Time Limit: Not Reported 00:09:08.851 RUH Desc #000: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #001: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #002: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #003: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #004: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #005: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #006: RUH Type: Initially Isolated 00:09:08.851 RUH Desc #007: RUH Type: Initially Isolated 00:09:08.851 00:09:08.851 FDP reclaim unit handle usage log page 00:09:08.851 ====================================== 00:09:08.851 Number of Reclaim Unit Handles: 8 00:09:08.851 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:08.851 RUH Usage Desc #001: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #002: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #003: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #004: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #005: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #006: RUH Attributes: Unused 00:09:08.851 RUH Usage Desc #007: RUH Attributes: Unused 00:09:08.851 00:09:08.851 FDP statistics log page 00:09:08.851 ======================= 00:09:08.851 Host bytes with metadata written: 411975680 00:09:08.851 Media bytes with metadata written: 412065792 00:09:08.851 Media bytes erased: 0 00:09:08.851 00:09:08.851 FDP events log page 00:09:08.851 =================== 00:09:08.851 Number of FDP events: 0 00:09:08.851 00:09:08.851 00:09:08.851 real 0m1.021s 00:09:08.851 user 0m0.354s 00:09:08.851 sys 0m0.454s 00:09:08.851 06:34:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.851 ************************************ 00:09:08.851 END TEST nvme_identify 00:09:08.851 ************************************ 00:09:08.851 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:09:08.851 06:34:19 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:08.851 06:34:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.851 06:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.851 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:09:08.851 ************************************ 00:09:08.851 START TEST nvme_perf 00:09:08.851 ************************************ 00:09:08.851 06:34:19 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:08.851 06:34:19 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:10.231 Initializing NVMe Controllers 00:09:10.231 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:10.231
Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:10.231 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:10.231 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:10.231 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:10.231 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:10.231 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:10.231 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:10.231 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:10.231 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:10.231 Initialization complete. Launching workers. 00:09:10.231 ======================================================== 00:09:10.231 Latency(us) 00:09:10.231 Device Information : IOPS MiB/s Average min max 00:09:10.231 PCIE (0000:00:07.0) NSID 1 from core 0: 15022.23 176.04 8517.00 5530.86 33296.48 00:09:10.231 PCIE (0000:00:09.0) NSID 1 from core 0: 15022.23 176.04 8513.46 5533.36 34025.92 00:09:10.231 PCIE (0000:00:06.0) NSID 1 from core 0: 15022.23 176.04 8505.62 5249.41 34787.51 00:09:10.231 PCIE (0000:00:08.0) NSID 1 from core 0: 15022.23 176.04 8499.46 4959.97 36388.35 00:09:10.231 PCIE (0000:00:08.0) NSID 2 from core 0: 15022.23 176.04 8492.47 3909.56 38563.96 00:09:10.231 PCIE (0000:00:08.0) NSID 3 from core 0: 15149.54 177.53 8414.46 3510.43 24891.98 00:09:10.231 ======================================================== 00:09:10.231 Total : 90260.71 1057.74 8490.31 3510.43 38563.96 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5646.178us 00:09:10.232 10.00000% : 5973.858us 00:09:10.232 25.00000% : 6402.363us 00:09:10.232 50.00000% : 7108.135us 00:09:10.232 75.00000% : 10334.523us 00:09:10.232 90.00000% : 13611.323us 00:09:10.232 95.00000% : 14619.569us 00:09:10.232 98.00000% : 15627.815us 00:09:10.232 99.00000% : 16333.588us 00:09:10.232 99.50000% : 31860.578us 00:09:10.232 99.90000% : 33070.474us 00:09:10.232 99.99000% : 33272.123us 00:09:10.232 99.99900% : 33473.772us 00:09:10.232 99.99990% : 33473.772us 00:09:10.232 99.99999% : 33473.772us 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5646.178us 00:09:10.232 10.00000% : 5973.858us 00:09:10.232 25.00000% : 6402.363us 00:09:10.232 50.00000% : 7057.723us 00:09:10.232 75.00000% : 10435.348us 00:09:10.232 90.00000% : 13510.498us 00:09:10.232 95.00000% : 14619.569us 00:09:10.232 98.00000% : 15728.640us 00:09:10.232 99.00000% : 16636.062us 00:09:10.232 99.50000% : 32667.175us 00:09:10.232 99.90000% : 33877.071us 00:09:10.232 99.99000% : 34078.720us 00:09:10.232 99.99900% : 34078.720us 00:09:10.232 99.99990% : 34078.720us 00:09:10.232 99.99999% : 34078.720us 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5494.942us 00:09:10.232 10.00000% : 5873.034us 00:09:10.232 25.00000% : 6351.951us 00:09:10.232 50.00000% : 7108.135us 00:09:10.232 75.00000% : 10233.698us 00:09:10.232 90.00000% : 13510.498us 00:09:10.232 95.00000% : 14720.394us 00:09:10.232 98.00000% : 15829.465us 00:09:10.232 99.00000% : 17341.834us 00:09:10.232 99.50000% : 33272.123us 00:09:10.232 99.90000% : 34482.018us 
00:09:10.232 99.99000% : 34885.317us 00:09:10.232 99.99900% : 34885.317us 00:09:10.232 99.99990% : 34885.317us 00:09:10.232 99.99999% : 34885.317us 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5646.178us 00:09:10.232 10.00000% : 5973.858us 00:09:10.232 25.00000% : 6402.363us 00:09:10.232 50.00000% : 7108.135us 00:09:10.232 75.00000% : 10132.874us 00:09:10.232 90.00000% : 13409.674us 00:09:10.232 95.00000% : 14619.569us 00:09:10.232 98.00000% : 15728.640us 00:09:10.232 99.00000% : 17140.185us 00:09:10.232 99.50000% : 34885.317us 00:09:10.232 99.90000% : 36095.212us 00:09:10.232 99.99000% : 36498.511us 00:09:10.232 99.99900% : 36498.511us 00:09:10.232 99.99990% : 36498.511us 00:09:10.232 99.99999% : 36498.511us 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5570.560us 00:09:10.232 10.00000% : 5948.652us 00:09:10.232 25.00000% : 6377.157us 00:09:10.232 50.00000% : 7108.135us 00:09:10.232 75.00000% : 9981.637us 00:09:10.232 90.00000% : 13409.674us 00:09:10.232 95.00000% : 14518.745us 00:09:10.232 98.00000% : 15829.465us 00:09:10.232 99.00000% : 17644.308us 00:09:10.232 99.50000% : 37103.458us 00:09:10.232 99.90000% : 38313.354us 00:09:10.232 99.99000% : 38716.652us 00:09:10.232 99.99900% : 38716.652us 00:09:10.232 99.99990% : 38716.652us 00:09:10.232 99.99999% : 38716.652us 00:09:10.232 00:09:10.232 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:10.232 ================================================================================= 00:09:10.232 1.00000% : 5570.560us 00:09:10.232 10.00000% : 5973.858us 00:09:10.232 25.00000% : 6402.363us 00:09:10.232 50.00000% : 7108.135us 00:09:10.232 75.00000% : 10183.286us 00:09:10.232 90.00000% : 13510.498us 00:09:10.232 95.00000% : 14619.569us 00:09:10.232 98.00000% : 15627.815us 00:09:10.232 99.00000% : 16434.412us 00:09:10.232 99.50000% : 23391.311us 00:09:10.232 99.90000% : 24601.206us 00:09:10.232 99.99000% : 24903.680us 00:09:10.232 99.99900% : 24903.680us 00:09:10.232 99.99990% : 24903.680us 00:09:10.232 99.99999% : 24903.680us 00:09:10.232 00:09:10.232 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:10.232 ============================================================================== 00:09:10.232 Range in us Cumulative IO count 00:09:10.232 5520.148 - 5545.354: 0.0265% ( 4) 00:09:10.232 5545.354 - 5570.560: 0.1457% ( 18) 00:09:10.232 5570.560 - 5595.766: 0.3906% ( 37) 00:09:10.232 5595.766 - 5620.972: 0.7415% ( 53) 00:09:10.232 5620.972 - 5646.178: 1.1057% ( 55) 00:09:10.232 5646.178 - 5671.385: 1.6022% ( 75) 00:09:10.232 5671.385 - 5696.591: 2.1915% ( 89) 00:09:10.232 5696.591 - 5721.797: 2.9198% ( 110) 00:09:10.232 5721.797 - 5747.003: 3.5355% ( 93) 00:09:10.232 5747.003 - 5772.209: 4.2174% ( 103) 00:09:10.232 5772.209 - 5797.415: 4.8265% ( 92) 00:09:10.232 5797.415 - 5822.622: 5.4754% ( 98) 00:09:10.232 5822.622 - 5847.828: 6.1374% ( 100) 00:09:10.232 5847.828 - 5873.034: 6.9121% ( 117) 00:09:10.232 5873.034 - 5898.240: 7.6602% ( 113) 00:09:10.232 5898.240 - 5923.446: 8.4746% ( 123) 00:09:10.232 5923.446 - 5948.652: 9.2956% ( 124) 00:09:10.232 5948.652 - 5973.858: 10.1033% ( 122) 00:09:10.232 5973.858 - 5999.065: 10.9640% ( 130) 00:09:10.232 5999.065 - 6024.271: 11.8776% ( 138) 
00:09:10.232 6024.271 - 6049.477: 12.7317% ( 129) 00:09:10.232 6049.477 - 6074.683: 13.6123% ( 133) 00:09:10.232 6074.683 - 6099.889: 14.4928% ( 133) 00:09:10.232 6099.889 - 6125.095: 15.3999% ( 137) 00:09:10.232 6125.095 - 6150.302: 16.2672% ( 131) 00:09:10.232 6150.302 - 6175.508: 17.1610% ( 135) 00:09:10.232 6175.508 - 6200.714: 18.0747% ( 138) 00:09:10.232 6200.714 - 6225.920: 19.0215% ( 143) 00:09:10.232 6225.920 - 6251.126: 19.9285% ( 137) 00:09:10.232 6251.126 - 6276.332: 20.8488% ( 139) 00:09:10.232 6276.332 - 6301.538: 21.7492% ( 136) 00:09:10.232 6301.538 - 6326.745: 22.6761% ( 140) 00:09:10.232 6326.745 - 6351.951: 23.5434% ( 131) 00:09:10.232 6351.951 - 6377.157: 24.4637% ( 139) 00:09:10.232 6377.157 - 6402.363: 25.3310% ( 131) 00:09:10.232 6402.363 - 6427.569: 26.2315% ( 136) 00:09:10.232 6427.569 - 6452.775: 27.1517% ( 139) 00:09:10.232 6452.775 - 6503.188: 28.9857% ( 277) 00:09:10.232 6503.188 - 6553.600: 30.8197% ( 277) 00:09:10.232 6553.600 - 6604.012: 32.6668% ( 279) 00:09:10.232 6604.012 - 6654.425: 34.5140% ( 279) 00:09:10.232 6654.425 - 6704.837: 36.4076% ( 286) 00:09:10.232 6704.837 - 6755.249: 38.3276% ( 290) 00:09:10.232 6755.249 - 6805.662: 40.1218% ( 271) 00:09:10.232 6805.662 - 6856.074: 41.9756% ( 280) 00:09:10.232 6856.074 - 6906.486: 43.8758% ( 287) 00:09:10.232 6906.486 - 6956.898: 45.7627% ( 285) 00:09:10.232 6956.898 - 7007.311: 47.6496% ( 285) 00:09:10.232 7007.311 - 7057.723: 49.5432% ( 286) 00:09:10.232 7057.723 - 7108.135: 51.3837% ( 278) 00:09:10.232 7108.135 - 7158.548: 53.2839% ( 287) 00:09:10.232 7158.548 - 7208.960: 55.2172% ( 292) 00:09:10.232 7208.960 - 7259.372: 57.1107% ( 286) 00:09:10.232 7259.372 - 7309.785: 59.0704% ( 296) 00:09:10.232 7309.785 - 7360.197: 60.9176% ( 279) 00:09:10.232 7360.197 - 7410.609: 62.7847% ( 282) 00:09:10.232 7410.609 - 7461.022: 64.2677% ( 224) 00:09:10.232 7461.022 - 7511.434: 65.5787% ( 198) 00:09:10.232 7511.434 - 7561.846: 66.4327% ( 129) 00:09:10.232 7561.846 - 7612.258: 67.1676% ( 111) 00:09:10.232 7612.258 - 7662.671: 67.7701% ( 91) 00:09:10.232 7662.671 - 7713.083: 68.3263% ( 84) 00:09:10.232 7713.083 - 7763.495: 68.7434% ( 63) 00:09:10.232 7763.495 - 7813.908: 69.1009% ( 54) 00:09:10.232 7813.908 - 7864.320: 69.4187% ( 48) 00:09:10.232 7864.320 - 7914.732: 69.6306% ( 32) 00:09:10.232 7914.732 - 7965.145: 69.7762% ( 22) 00:09:10.232 7965.145 - 8015.557: 69.9219% ( 22) 00:09:10.232 8015.557 - 8065.969: 70.0808% ( 24) 00:09:10.232 8065.969 - 8116.382: 70.2264% ( 22) 00:09:10.232 8116.382 - 8166.794: 70.3919% ( 25) 00:09:10.232 8166.794 - 8217.206: 70.5310% ( 21) 00:09:10.232 8217.206 - 8267.618: 70.6700% ( 21) 00:09:10.232 8267.618 - 8318.031: 70.8024% ( 20) 00:09:10.232 8318.031 - 8368.443: 70.9216% ( 18) 00:09:10.232 8368.443 - 8418.855: 71.0342% ( 17) 00:09:10.232 8418.855 - 8469.268: 71.1202% ( 13) 00:09:10.232 8469.268 - 8519.680: 71.2063% ( 13) 00:09:10.232 8519.680 - 8570.092: 71.3056% ( 15) 00:09:10.232 8570.092 - 8620.505: 71.3718% ( 10) 00:09:10.232 8620.505 - 8670.917: 71.4447% ( 11) 00:09:10.232 8670.917 - 8721.329: 71.5109% ( 10) 00:09:10.232 8721.329 - 8771.742: 71.5837% ( 11) 00:09:10.232 8771.742 - 8822.154: 71.6433% ( 9) 00:09:10.232 8822.154 - 8872.566: 71.7095% ( 10) 00:09:10.232 8872.566 - 8922.978: 71.7492% ( 6) 00:09:10.232 8922.978 - 8973.391: 71.7889% ( 6) 00:09:10.232 8973.391 - 9023.803: 71.8154% ( 4) 00:09:10.232 9023.803 - 9074.215: 71.8618% ( 7) 00:09:10.232 9074.215 - 9124.628: 71.9015% ( 6) 00:09:10.232 9124.628 - 9175.040: 71.9412% ( 6) 00:09:10.232 9175.040 - 9225.452: 71.9809% 
( 6) 00:09:10.232 9225.452 - 9275.865: 72.0273% ( 7) 00:09:10.233 9275.865 - 9326.277: 72.0670% ( 6) 00:09:10.233 9326.277 - 9376.689: 72.1332% ( 10) 00:09:10.233 9376.689 - 9427.102: 72.2127% ( 12) 00:09:10.233 9427.102 - 9477.514: 72.2987% ( 13) 00:09:10.233 9477.514 - 9527.926: 72.3716% ( 11) 00:09:10.233 9527.926 - 9578.338: 72.4378% ( 10) 00:09:10.233 9578.338 - 9628.751: 72.5437% ( 16) 00:09:10.233 9628.751 - 9679.163: 72.6761% ( 20) 00:09:10.233 9679.163 - 9729.575: 72.7887% ( 17) 00:09:10.233 9729.575 - 9779.988: 72.9409% ( 23) 00:09:10.233 9779.988 - 9830.400: 73.0800% ( 21) 00:09:10.233 9830.400 - 9880.812: 73.2389% ( 24) 00:09:10.233 9880.812 - 9931.225: 73.4110% ( 26) 00:09:10.233 9931.225 - 9981.637: 73.5898% ( 27) 00:09:10.233 9981.637 - 10032.049: 73.7619% ( 26) 00:09:10.233 10032.049 - 10082.462: 73.9341% ( 26) 00:09:10.233 10082.462 - 10132.874: 74.1194% ( 28) 00:09:10.233 10132.874 - 10183.286: 74.2982% ( 27) 00:09:10.233 10183.286 - 10233.698: 74.5167% ( 33) 00:09:10.233 10233.698 - 10284.111: 74.7683% ( 38) 00:09:10.233 10284.111 - 10334.523: 75.0199% ( 38) 00:09:10.233 10334.523 - 10384.935: 75.2847% ( 40) 00:09:10.233 10384.935 - 10435.348: 75.5363% ( 38) 00:09:10.233 10435.348 - 10485.760: 75.8011% ( 40) 00:09:10.233 10485.760 - 10536.172: 76.0527% ( 38) 00:09:10.233 10536.172 - 10586.585: 76.3175% ( 40) 00:09:10.233 10586.585 - 10636.997: 76.5691% ( 38) 00:09:10.233 10636.997 - 10687.409: 76.8869% ( 48) 00:09:10.233 10687.409 - 10737.822: 77.1716% ( 43) 00:09:10.233 10737.822 - 10788.234: 77.4563% ( 43) 00:09:10.233 10788.234 - 10838.646: 77.7079% ( 38) 00:09:10.233 10838.646 - 10889.058: 77.9860% ( 42) 00:09:10.233 10889.058 - 10939.471: 78.2574% ( 41) 00:09:10.233 10939.471 - 10989.883: 78.5289% ( 41) 00:09:10.233 10989.883 - 11040.295: 78.8268% ( 45) 00:09:10.233 11040.295 - 11090.708: 79.1115% ( 43) 00:09:10.233 11090.708 - 11141.120: 79.4028% ( 44) 00:09:10.233 11141.120 - 11191.532: 79.7007% ( 45) 00:09:10.233 11191.532 - 11241.945: 79.9788% ( 42) 00:09:10.233 11241.945 - 11292.357: 80.3032% ( 49) 00:09:10.233 11292.357 - 11342.769: 80.5747% ( 41) 00:09:10.233 11342.769 - 11393.182: 80.8660% ( 44) 00:09:10.233 11393.182 - 11443.594: 81.1176% ( 38) 00:09:10.233 11443.594 - 11494.006: 81.3493% ( 35) 00:09:10.233 11494.006 - 11544.418: 81.5810% ( 35) 00:09:10.233 11544.418 - 11594.831: 81.8061% ( 34) 00:09:10.233 11594.831 - 11645.243: 82.0776% ( 41) 00:09:10.233 11645.243 - 11695.655: 82.3292% ( 38) 00:09:10.233 11695.655 - 11746.068: 82.5543% ( 34) 00:09:10.233 11746.068 - 11796.480: 82.8324% ( 42) 00:09:10.233 11796.480 - 11846.892: 83.0773% ( 37) 00:09:10.233 11846.892 - 11897.305: 83.3024% ( 34) 00:09:10.233 11897.305 - 11947.717: 83.5143% ( 32) 00:09:10.233 11947.717 - 11998.129: 83.7593% ( 37) 00:09:10.233 11998.129 - 12048.542: 83.9910% ( 35) 00:09:10.233 12048.542 - 12098.954: 84.2029% ( 32) 00:09:10.233 12098.954 - 12149.366: 84.3949% ( 29) 00:09:10.233 12149.366 - 12199.778: 84.5935% ( 30) 00:09:10.233 12199.778 - 12250.191: 84.7325% ( 21) 00:09:10.233 12250.191 - 12300.603: 84.8782% ( 22) 00:09:10.233 12300.603 - 12351.015: 85.0437% ( 25) 00:09:10.233 12351.015 - 12401.428: 85.2158% ( 26) 00:09:10.233 12401.428 - 12451.840: 85.4211% ( 31) 00:09:10.233 12451.840 - 12502.252: 85.6065% ( 28) 00:09:10.233 12502.252 - 12552.665: 85.7852% ( 27) 00:09:10.233 12552.665 - 12603.077: 85.9838% ( 30) 00:09:10.233 12603.077 - 12653.489: 86.1560% ( 26) 00:09:10.233 12653.489 - 12703.902: 86.3149% ( 24) 00:09:10.233 12703.902 - 12754.314: 86.5135% ( 30) 00:09:10.233 
12754.314 - 12804.726: 86.7188% ( 31) 00:09:10.233 12804.726 - 12855.138: 86.9041% ( 28) 00:09:10.233 12855.138 - 12905.551: 87.1160% ( 32) 00:09:10.233 12905.551 - 13006.375: 87.6059% ( 74) 00:09:10.233 13006.375 - 13107.200: 88.0628% ( 69) 00:09:10.233 13107.200 - 13208.025: 88.5461% ( 73) 00:09:10.233 13208.025 - 13308.849: 89.0162% ( 71) 00:09:10.233 13308.849 - 13409.674: 89.4333% ( 63) 00:09:10.233 13409.674 - 13510.498: 89.8371% ( 61) 00:09:10.233 13510.498 - 13611.323: 90.2542% ( 63) 00:09:10.233 13611.323 - 13712.148: 90.6780% ( 64) 00:09:10.233 13712.148 - 13812.972: 91.1348% ( 69) 00:09:10.233 13812.972 - 13913.797: 91.6777% ( 82) 00:09:10.233 13913.797 - 14014.622: 92.2140% ( 81) 00:09:10.233 14014.622 - 14115.446: 92.6708% ( 69) 00:09:10.233 14115.446 - 14216.271: 93.1078% ( 66) 00:09:10.233 14216.271 - 14317.095: 93.5580% ( 68) 00:09:10.233 14317.095 - 14417.920: 94.0678% ( 77) 00:09:10.233 14417.920 - 14518.745: 94.6107% ( 82) 00:09:10.233 14518.745 - 14619.569: 95.0742% ( 70) 00:09:10.233 14619.569 - 14720.394: 95.4979% ( 64) 00:09:10.233 14720.394 - 14821.218: 95.8819% ( 58) 00:09:10.233 14821.218 - 14922.043: 96.2460% ( 55) 00:09:10.233 14922.043 - 15022.868: 96.5969% ( 53) 00:09:10.233 15022.868 - 15123.692: 96.9478% ( 53) 00:09:10.233 15123.692 - 15224.517: 97.2458% ( 45) 00:09:10.233 15224.517 - 15325.342: 97.4974% ( 38) 00:09:10.233 15325.342 - 15426.166: 97.7291% ( 35) 00:09:10.233 15426.166 - 15526.991: 97.9145% ( 28) 00:09:10.233 15526.991 - 15627.815: 98.0866% ( 26) 00:09:10.233 15627.815 - 15728.640: 98.2521% ( 25) 00:09:10.233 15728.640 - 15829.465: 98.4243% ( 26) 00:09:10.233 15829.465 - 15930.289: 98.6030% ( 27) 00:09:10.233 15930.289 - 16031.114: 98.7619% ( 24) 00:09:10.233 16031.114 - 16131.938: 98.8678% ( 16) 00:09:10.233 16131.938 - 16232.763: 98.9738% ( 16) 00:09:10.233 16232.763 - 16333.588: 99.0599% ( 13) 00:09:10.233 16333.588 - 16434.412: 99.0996% ( 6) 00:09:10.233 16434.412 - 16535.237: 99.1327% ( 5) 00:09:10.233 16535.237 - 16636.062: 99.1525% ( 3) 00:09:10.233 30650.683 - 30852.332: 99.1856% ( 5) 00:09:10.233 30852.332 - 31053.982: 99.2519% ( 10) 00:09:10.233 31053.982 - 31255.631: 99.3313% ( 12) 00:09:10.233 31255.631 - 31457.280: 99.4041% ( 11) 00:09:10.233 31457.280 - 31658.929: 99.4637% ( 9) 00:09:10.233 31658.929 - 31860.578: 99.5299% ( 10) 00:09:10.233 31860.578 - 32062.228: 99.5895% ( 9) 00:09:10.233 32062.228 - 32263.877: 99.6557% ( 10) 00:09:10.233 32263.877 - 32465.526: 99.7219% ( 10) 00:09:10.233 32465.526 - 32667.175: 99.7881% ( 10) 00:09:10.233 32667.175 - 32868.825: 99.8610% ( 11) 00:09:10.233 32868.825 - 33070.474: 99.9272% ( 10) 00:09:10.233 33070.474 - 33272.123: 99.9934% ( 10) 00:09:10.233 33272.123 - 33473.772: 100.0000% ( 1) 00:09:10.233 00:09:10.233 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:10.233 ============================================================================== 00:09:10.233 Range in us Cumulative IO count 00:09:10.233 5520.148 - 5545.354: 0.0265% ( 4) 00:09:10.233 5545.354 - 5570.560: 0.0794% ( 8) 00:09:10.233 5570.560 - 5595.766: 0.2847% ( 31) 00:09:10.233 5595.766 - 5620.972: 0.7283% ( 67) 00:09:10.233 5620.972 - 5646.178: 1.1520% ( 64) 00:09:10.233 5646.178 - 5671.385: 1.5956% ( 67) 00:09:10.233 5671.385 - 5696.591: 2.1584% ( 85) 00:09:10.233 5696.591 - 5721.797: 2.8337% ( 102) 00:09:10.233 5721.797 - 5747.003: 3.3965% ( 85) 00:09:10.233 5747.003 - 5772.209: 4.0784% ( 103) 00:09:10.233 5772.209 - 5797.415: 4.7338% ( 99) 00:09:10.233 5797.415 - 5822.622: 5.4290% ( 105) 00:09:10.233 
5822.622 - 5847.828: 6.1308% ( 106) 00:09:10.233 5847.828 - 5873.034: 6.9055% ( 117) 00:09:10.233 5873.034 - 5898.240: 7.6205% ( 108) 00:09:10.233 5898.240 - 5923.446: 8.3885% ( 116) 00:09:10.233 5923.446 - 5948.652: 9.1698% ( 118) 00:09:10.233 5948.652 - 5973.858: 10.0900% ( 139) 00:09:10.233 5973.858 - 5999.065: 10.9044% ( 123) 00:09:10.233 5999.065 - 6024.271: 11.7850% ( 133) 00:09:10.233 6024.271 - 6049.477: 12.8112% ( 155) 00:09:10.233 6049.477 - 6074.683: 13.6653% ( 129) 00:09:10.233 6074.683 - 6099.889: 14.5789% ( 138) 00:09:10.233 6099.889 - 6125.095: 15.4595% ( 133) 00:09:10.233 6125.095 - 6150.302: 16.2937% ( 126) 00:09:10.233 6150.302 - 6175.508: 17.1544% ( 130) 00:09:10.233 6175.508 - 6200.714: 18.1012% ( 143) 00:09:10.233 6200.714 - 6225.920: 18.9817% ( 133) 00:09:10.233 6225.920 - 6251.126: 19.9153% ( 141) 00:09:10.233 6251.126 - 6276.332: 20.8289% ( 138) 00:09:10.233 6276.332 - 6301.538: 21.7360% ( 137) 00:09:10.233 6301.538 - 6326.745: 22.6629% ( 140) 00:09:10.233 6326.745 - 6351.951: 23.5832% ( 139) 00:09:10.233 6351.951 - 6377.157: 24.4571% ( 132) 00:09:10.233 6377.157 - 6402.363: 25.3972% ( 142) 00:09:10.233 6402.363 - 6427.569: 26.3374% ( 142) 00:09:10.233 6427.569 - 6452.775: 27.2643% ( 140) 00:09:10.233 6452.775 - 6503.188: 29.0850% ( 275) 00:09:10.233 6503.188 - 6553.600: 30.9852% ( 287) 00:09:10.233 6553.600 - 6604.012: 32.8390% ( 280) 00:09:10.233 6604.012 - 6654.425: 34.8120% ( 298) 00:09:10.233 6654.425 - 6704.837: 36.6790% ( 282) 00:09:10.233 6704.837 - 6755.249: 38.6520% ( 298) 00:09:10.233 6755.249 - 6805.662: 40.5456% ( 286) 00:09:10.233 6805.662 - 6856.074: 42.5252% ( 299) 00:09:10.233 6856.074 - 6906.486: 44.5114% ( 300) 00:09:10.233 6906.486 - 6956.898: 46.4314% ( 290) 00:09:10.233 6956.898 - 7007.311: 48.3514% ( 290) 00:09:10.233 7007.311 - 7057.723: 50.3112% ( 296) 00:09:10.233 7057.723 - 7108.135: 52.2113% ( 287) 00:09:10.233 7108.135 - 7158.548: 54.1247% ( 289) 00:09:10.233 7158.548 - 7208.960: 56.0779% ( 295) 00:09:10.233 7208.960 - 7259.372: 57.9846% ( 288) 00:09:10.233 7259.372 - 7309.785: 59.8914% ( 288) 00:09:10.233 7309.785 - 7360.197: 61.8048% ( 289) 00:09:10.233 7360.197 - 7410.609: 63.6454% ( 278) 00:09:10.233 7410.609 - 7461.022: 65.1682% ( 230) 00:09:10.233 7461.022 - 7511.434: 66.4526% ( 194) 00:09:10.233 7511.434 - 7561.846: 67.3530% ( 136) 00:09:10.233 7561.846 - 7612.258: 68.0482% ( 105) 00:09:10.233 7612.258 - 7662.671: 68.6441% ( 90) 00:09:10.233 7662.671 - 7713.083: 69.1870% ( 82) 00:09:10.234 7713.083 - 7763.495: 69.6173% ( 65) 00:09:10.234 7763.495 - 7813.908: 69.9881% ( 56) 00:09:10.234 7813.908 - 7864.320: 70.2794% ( 44) 00:09:10.234 7864.320 - 7914.732: 70.4780% ( 30) 00:09:10.234 7914.732 - 7965.145: 70.6700% ( 29) 00:09:10.234 7965.145 - 8015.557: 70.8422% ( 26) 00:09:10.234 8015.557 - 8065.969: 71.0143% ( 26) 00:09:10.234 8065.969 - 8116.382: 71.1732% ( 24) 00:09:10.234 8116.382 - 8166.794: 71.3189% ( 22) 00:09:10.234 8166.794 - 8217.206: 71.4447% ( 19) 00:09:10.234 8217.206 - 8267.618: 71.5506% ( 16) 00:09:10.234 8267.618 - 8318.031: 71.6698% ( 18) 00:09:10.234 8318.031 - 8368.443: 71.7558% ( 13) 00:09:10.234 8368.443 - 8418.855: 71.8287% ( 11) 00:09:10.234 8418.855 - 8469.268: 71.8816% ( 8) 00:09:10.234 8469.268 - 8519.680: 71.9213% ( 6) 00:09:10.234 8519.680 - 8570.092: 71.9743% ( 8) 00:09:10.234 8570.092 - 8620.505: 72.0008% ( 4) 00:09:10.234 8620.505 - 8670.917: 72.0207% ( 3) 00:09:10.234 8670.917 - 8721.329: 72.0339% ( 2) 00:09:10.234 9225.452 - 9275.865: 72.0869% ( 8) 00:09:10.234 9275.865 - 9326.277: 72.1133% ( 4) 
00:09:10.234 9326.277 - 9376.689: 72.1729% ( 9) 00:09:10.234 9376.689 - 9427.102: 72.2590% ( 13) 00:09:10.234 9427.102 - 9477.514: 72.3385% ( 12) 00:09:10.234 9477.514 - 9527.926: 72.4311% ( 14) 00:09:10.234 9527.926 - 9578.338: 72.5503% ( 18) 00:09:10.234 9578.338 - 9628.751: 72.6695% ( 18) 00:09:10.234 9628.751 - 9679.163: 72.7622% ( 14) 00:09:10.234 9679.163 - 9729.575: 72.8549% ( 14) 00:09:10.234 9729.575 - 9779.988: 72.9740% ( 18) 00:09:10.234 9779.988 - 9830.400: 73.0998% ( 19) 00:09:10.234 9830.400 - 9880.812: 73.2190% ( 18) 00:09:10.234 9880.812 - 9931.225: 73.3448% ( 19) 00:09:10.234 9931.225 - 9981.637: 73.4772% ( 20) 00:09:10.234 9981.637 - 10032.049: 73.6030% ( 19) 00:09:10.234 10032.049 - 10082.462: 73.7354% ( 20) 00:09:10.234 10082.462 - 10132.874: 73.8745% ( 21) 00:09:10.234 10132.874 - 10183.286: 74.0334% ( 24) 00:09:10.234 10183.286 - 10233.698: 74.1790% ( 22) 00:09:10.234 10233.698 - 10284.111: 74.3512% ( 26) 00:09:10.234 10284.111 - 10334.523: 74.5498% ( 30) 00:09:10.234 10334.523 - 10384.935: 74.7815% ( 35) 00:09:10.234 10384.935 - 10435.348: 75.0132% ( 35) 00:09:10.234 10435.348 - 10485.760: 75.2450% ( 35) 00:09:10.234 10485.760 - 10536.172: 75.4899% ( 37) 00:09:10.234 10536.172 - 10586.585: 75.7415% ( 38) 00:09:10.234 10586.585 - 10636.997: 75.9865% ( 37) 00:09:10.234 10636.997 - 10687.409: 76.2646% ( 42) 00:09:10.234 10687.409 - 10737.822: 76.5426% ( 42) 00:09:10.234 10737.822 - 10788.234: 76.8207% ( 42) 00:09:10.234 10788.234 - 10838.646: 77.1186% ( 45) 00:09:10.234 10838.646 - 10889.058: 77.4100% ( 44) 00:09:10.234 10889.058 - 10939.471: 77.7079% ( 45) 00:09:10.234 10939.471 - 10989.883: 77.9992% ( 44) 00:09:10.234 10989.883 - 11040.295: 78.3236% ( 49) 00:09:10.234 11040.295 - 11090.708: 78.6216% ( 45) 00:09:10.234 11090.708 - 11141.120: 78.9394% ( 48) 00:09:10.234 11141.120 - 11191.532: 79.2704% ( 50) 00:09:10.234 11191.532 - 11241.945: 79.5882% ( 48) 00:09:10.234 11241.945 - 11292.357: 79.8861% ( 45) 00:09:10.234 11292.357 - 11342.769: 80.1708% ( 43) 00:09:10.234 11342.769 - 11393.182: 80.4224% ( 38) 00:09:10.234 11393.182 - 11443.594: 80.6740% ( 38) 00:09:10.234 11443.594 - 11494.006: 80.8925% ( 33) 00:09:10.234 11494.006 - 11544.418: 81.0977% ( 31) 00:09:10.234 11544.418 - 11594.831: 81.3228% ( 34) 00:09:10.234 11594.831 - 11645.243: 81.5479% ( 34) 00:09:10.234 11645.243 - 11695.655: 81.7995% ( 38) 00:09:10.234 11695.655 - 11746.068: 82.0445% ( 37) 00:09:10.234 11746.068 - 11796.480: 82.3159% ( 41) 00:09:10.234 11796.480 - 11846.892: 82.5742% ( 39) 00:09:10.234 11846.892 - 11897.305: 82.7993% ( 34) 00:09:10.234 11897.305 - 11947.717: 83.0773% ( 42) 00:09:10.234 11947.717 - 11998.129: 83.3024% ( 34) 00:09:10.234 11998.129 - 12048.542: 83.5474% ( 37) 00:09:10.234 12048.542 - 12098.954: 83.7858% ( 36) 00:09:10.234 12098.954 - 12149.366: 84.0440% ( 39) 00:09:10.234 12149.366 - 12199.778: 84.2956% ( 38) 00:09:10.234 12199.778 - 12250.191: 84.5140% ( 33) 00:09:10.234 12250.191 - 12300.603: 84.7391% ( 34) 00:09:10.234 12300.603 - 12351.015: 84.9709% ( 35) 00:09:10.234 12351.015 - 12401.428: 85.2158% ( 37) 00:09:10.234 12401.428 - 12451.840: 85.4145% ( 30) 00:09:10.234 12451.840 - 12502.252: 85.6263% ( 32) 00:09:10.234 12502.252 - 12552.665: 85.8183% ( 29) 00:09:10.234 12552.665 - 12603.077: 86.0236% ( 31) 00:09:10.234 12603.077 - 12653.489: 86.2222% ( 30) 00:09:10.234 12653.489 - 12703.902: 86.4274% ( 31) 00:09:10.234 12703.902 - 12754.314: 86.5996% ( 26) 00:09:10.234 12754.314 - 12804.726: 86.8048% ( 31) 00:09:10.234 12804.726 - 12855.138: 87.0233% ( 33) 00:09:10.234 
12855.138 - 12905.551: 87.2484% ( 34) 00:09:10.234 12905.551 - 13006.375: 87.7383% ( 74) 00:09:10.234 13006.375 - 13107.200: 88.2415% ( 76) 00:09:10.234 13107.200 - 13208.025: 88.7050% ( 70) 00:09:10.234 13208.025 - 13308.849: 89.1618% ( 69) 00:09:10.234 13308.849 - 13409.674: 89.6319% ( 71) 00:09:10.234 13409.674 - 13510.498: 90.0821% ( 68) 00:09:10.234 13510.498 - 13611.323: 90.5522% ( 71) 00:09:10.234 13611.323 - 13712.148: 91.0289% ( 72) 00:09:10.234 13712.148 - 13812.972: 91.5056% ( 72) 00:09:10.234 13812.972 - 13913.797: 91.9823% ( 72) 00:09:10.234 13913.797 - 14014.622: 92.4854% ( 76) 00:09:10.234 14014.622 - 14115.446: 92.9754% ( 74) 00:09:10.234 14115.446 - 14216.271: 93.4322% ( 69) 00:09:10.234 14216.271 - 14317.095: 93.8692% ( 66) 00:09:10.234 14317.095 - 14417.920: 94.3061% ( 66) 00:09:10.234 14417.920 - 14518.745: 94.8159% ( 77) 00:09:10.234 14518.745 - 14619.569: 95.2728% ( 69) 00:09:10.234 14619.569 - 14720.394: 95.6634% ( 59) 00:09:10.234 14720.394 - 14821.218: 96.0143% ( 53) 00:09:10.234 14821.218 - 14922.043: 96.3652% ( 53) 00:09:10.234 14922.043 - 15022.868: 96.6830% ( 48) 00:09:10.234 15022.868 - 15123.692: 96.9942% ( 47) 00:09:10.234 15123.692 - 15224.517: 97.2590% ( 40) 00:09:10.234 15224.517 - 15325.342: 97.4775% ( 33) 00:09:10.234 15325.342 - 15426.166: 97.6827% ( 31) 00:09:10.234 15426.166 - 15526.991: 97.8284% ( 22) 00:09:10.234 15526.991 - 15627.815: 97.9674% ( 21) 00:09:10.234 15627.815 - 15728.640: 98.0998% ( 20) 00:09:10.234 15728.640 - 15829.465: 98.2190% ( 18) 00:09:10.234 15829.465 - 15930.289: 98.3514% ( 20) 00:09:10.234 15930.289 - 16031.114: 98.4905% ( 21) 00:09:10.234 16031.114 - 16131.938: 98.6229% ( 20) 00:09:10.234 16131.938 - 16232.763: 98.7288% ( 16) 00:09:10.234 16232.763 - 16333.588: 98.8281% ( 15) 00:09:10.234 16333.588 - 16434.412: 98.9142% ( 13) 00:09:10.234 16434.412 - 16535.237: 98.9672% ( 8) 00:09:10.234 16535.237 - 16636.062: 99.0003% ( 5) 00:09:10.234 16636.062 - 16736.886: 99.0201% ( 3) 00:09:10.234 16736.886 - 16837.711: 99.0532% ( 5) 00:09:10.234 16837.711 - 16938.535: 99.0863% ( 5) 00:09:10.234 16938.535 - 17039.360: 99.1128% ( 4) 00:09:10.234 17039.360 - 17140.185: 99.1393% ( 4) 00:09:10.234 17140.185 - 17241.009: 99.1525% ( 2) 00:09:10.234 31255.631 - 31457.280: 99.1658% ( 2) 00:09:10.234 31457.280 - 31658.929: 99.2320% ( 10) 00:09:10.234 31658.929 - 31860.578: 99.2916% ( 9) 00:09:10.234 31860.578 - 32062.228: 99.3578% ( 10) 00:09:10.234 32062.228 - 32263.877: 99.4174% ( 9) 00:09:10.234 32263.877 - 32465.526: 99.4902% ( 11) 00:09:10.234 32465.526 - 32667.175: 99.5564% ( 10) 00:09:10.234 32667.175 - 32868.825: 99.6226% ( 10) 00:09:10.234 32868.825 - 33070.474: 99.6822% ( 9) 00:09:10.234 33070.474 - 33272.123: 99.7484% ( 10) 00:09:10.234 33272.123 - 33473.772: 99.8146% ( 10) 00:09:10.234 33473.772 - 33675.422: 99.8808% ( 10) 00:09:10.234 33675.422 - 33877.071: 99.9470% ( 10) 00:09:10.234 33877.071 - 34078.720: 100.0000% ( 8) 00:09:10.234 00:09:10.234 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:10.234 ============================================================================== 00:09:10.234 Range in us Cumulative IO count 00:09:10.234 5242.880 - 5268.086: 0.0132% ( 2) 00:09:10.234 5268.086 - 5293.292: 0.0331% ( 3) 00:09:10.234 5293.292 - 5318.498: 0.0397% ( 1) 00:09:10.234 5318.498 - 5343.705: 0.0530% ( 2) 00:09:10.234 5343.705 - 5368.911: 0.0662% ( 2) 00:09:10.234 5368.911 - 5394.117: 0.1523% ( 13) 00:09:10.234 5394.117 - 5419.323: 0.2913% ( 21) 00:09:10.234 5419.323 - 5444.529: 0.5760% ( 43) 00:09:10.234 
5444.529 - 5469.735: 0.8276% ( 38) 00:09:10.234 5469.735 - 5494.942: 1.1586% ( 50) 00:09:10.234 5494.942 - 5520.148: 1.5625% ( 61) 00:09:10.234 5520.148 - 5545.354: 2.0789% ( 78) 00:09:10.234 5545.354 - 5570.560: 2.5821% ( 76) 00:09:10.234 5570.560 - 5595.766: 3.0853% ( 76) 00:09:10.234 5595.766 - 5620.972: 3.6216% ( 81) 00:09:10.234 5620.972 - 5646.178: 4.2307% ( 92) 00:09:10.234 5646.178 - 5671.385: 4.8067% ( 87) 00:09:10.234 5671.385 - 5696.591: 5.5548% ( 113) 00:09:10.234 5696.591 - 5721.797: 6.2103% ( 99) 00:09:10.234 5721.797 - 5747.003: 6.7863% ( 87) 00:09:10.234 5747.003 - 5772.209: 7.5808% ( 120) 00:09:10.234 5772.209 - 5797.415: 8.3753% ( 120) 00:09:10.234 5797.415 - 5822.622: 9.0572% ( 103) 00:09:10.234 5822.622 - 5847.828: 9.8186% ( 115) 00:09:10.234 5847.828 - 5873.034: 10.5734% ( 114) 00:09:10.234 5873.034 - 5898.240: 11.2685% ( 105) 00:09:10.234 5898.240 - 5923.446: 11.9836% ( 108) 00:09:10.234 5923.446 - 5948.652: 12.7317% ( 113) 00:09:10.234 5948.652 - 5973.858: 13.5328% ( 121) 00:09:10.234 5973.858 - 5999.065: 14.2810% ( 113) 00:09:10.234 5999.065 - 6024.271: 15.0821% ( 121) 00:09:10.234 6024.271 - 6049.477: 15.8832% ( 121) 00:09:10.234 6049.477 - 6074.683: 16.5387% ( 99) 00:09:10.234 6074.683 - 6099.889: 17.3729% ( 126) 00:09:10.235 6099.889 - 6125.095: 18.0813% ( 107) 00:09:10.235 6125.095 - 6150.302: 18.9619% ( 133) 00:09:10.235 6150.302 - 6175.508: 19.7166% ( 114) 00:09:10.235 6175.508 - 6200.714: 20.5310% ( 123) 00:09:10.235 6200.714 - 6225.920: 21.3520% ( 124) 00:09:10.235 6225.920 - 6251.126: 22.1531% ( 121) 00:09:10.235 6251.126 - 6276.332: 22.9542% ( 121) 00:09:10.235 6276.332 - 6301.538: 23.7421% ( 119) 00:09:10.235 6301.538 - 6326.745: 24.5365% ( 120) 00:09:10.235 6326.745 - 6351.951: 25.3377% ( 121) 00:09:10.235 6351.951 - 6377.157: 26.1520% ( 123) 00:09:10.235 6377.157 - 6402.363: 26.9597% ( 122) 00:09:10.235 6402.363 - 6427.569: 27.7807% ( 124) 00:09:10.235 6427.569 - 6452.775: 28.6017% ( 124) 00:09:10.235 6452.775 - 6503.188: 30.2767% ( 253) 00:09:10.235 6503.188 - 6553.600: 31.9253% ( 249) 00:09:10.235 6553.600 - 6604.012: 33.5606% ( 247) 00:09:10.235 6604.012 - 6654.425: 35.2026% ( 248) 00:09:10.235 6654.425 - 6704.837: 36.8379% ( 247) 00:09:10.235 6704.837 - 6755.249: 38.5659% ( 261) 00:09:10.235 6755.249 - 6805.662: 40.1880% ( 245) 00:09:10.235 6805.662 - 6856.074: 41.8366% ( 249) 00:09:10.235 6856.074 - 6906.486: 43.4123% ( 238) 00:09:10.235 6906.486 - 6956.898: 45.0344% ( 245) 00:09:10.235 6956.898 - 7007.311: 46.6565% ( 245) 00:09:10.235 7007.311 - 7057.723: 48.3448% ( 255) 00:09:10.235 7057.723 - 7108.135: 50.0000% ( 250) 00:09:10.235 7108.135 - 7158.548: 51.6155% ( 244) 00:09:10.235 7158.548 - 7208.960: 53.2574% ( 248) 00:09:10.235 7208.960 - 7259.372: 54.8861% ( 246) 00:09:10.235 7259.372 - 7309.785: 56.5148% ( 246) 00:09:10.235 7309.785 - 7360.197: 58.1634% ( 249) 00:09:10.235 7360.197 - 7410.609: 59.8186% ( 250) 00:09:10.235 7410.609 - 7461.022: 61.4407% ( 245) 00:09:10.235 7461.022 - 7511.434: 63.1356% ( 256) 00:09:10.235 7511.434 - 7561.846: 64.5988% ( 221) 00:09:10.235 7561.846 - 7612.258: 65.9428% ( 203) 00:09:10.235 7612.258 - 7662.671: 67.0087% ( 161) 00:09:10.235 7662.671 - 7713.083: 67.8496% ( 127) 00:09:10.235 7713.083 - 7763.495: 68.5117% ( 100) 00:09:10.235 7763.495 - 7813.908: 69.0810% ( 86) 00:09:10.235 7813.908 - 7864.320: 69.5180% ( 66) 00:09:10.235 7864.320 - 7914.732: 69.9020% ( 58) 00:09:10.235 7914.732 - 7965.145: 70.1602% ( 39) 00:09:10.235 7965.145 - 8015.557: 70.3655% ( 31) 00:09:10.235 8015.557 - 8065.969: 70.5575% ( 29) 
00:09:10.235 8065.969 - 8116.382: 70.7230% ( 25) 00:09:10.235 8116.382 - 8166.794: 70.8554% ( 20) 00:09:10.235 8166.794 - 8217.206: 71.0077% ( 23) 00:09:10.235 8217.206 - 8267.618: 71.1533% ( 22) 00:09:10.235 8267.618 - 8318.031: 71.2924% ( 21) 00:09:10.235 8318.031 - 8368.443: 71.3784% ( 13) 00:09:10.235 8368.443 - 8418.855: 71.4910% ( 17) 00:09:10.235 8418.855 - 8469.268: 71.5837% ( 14) 00:09:10.235 8469.268 - 8519.680: 71.6499% ( 10) 00:09:10.235 8519.680 - 8570.092: 71.7227% ( 11) 00:09:10.235 8570.092 - 8620.505: 71.7558% ( 5) 00:09:10.235 8620.505 - 8670.917: 71.8022% ( 7) 00:09:10.235 8670.917 - 8721.329: 71.8287% ( 4) 00:09:10.235 8721.329 - 8771.742: 71.8684% ( 6) 00:09:10.235 8771.742 - 8822.154: 71.8949% ( 4) 00:09:10.235 8822.154 - 8872.566: 71.9280% ( 5) 00:09:10.235 8872.566 - 8922.978: 71.9677% ( 6) 00:09:10.235 8922.978 - 8973.391: 71.9942% ( 4) 00:09:10.235 8973.391 - 9023.803: 72.0405% ( 7) 00:09:10.235 9074.215 - 9124.628: 72.0670% ( 4) 00:09:10.235 9124.628 - 9175.040: 72.1067% ( 6) 00:09:10.235 9175.040 - 9225.452: 72.2789% ( 26) 00:09:10.235 9225.452 - 9275.865: 72.3451% ( 10) 00:09:10.235 9275.865 - 9326.277: 72.4709% ( 19) 00:09:10.235 9326.277 - 9376.689: 72.5503% ( 12) 00:09:10.235 9376.689 - 9427.102: 72.6827% ( 20) 00:09:10.235 9427.102 - 9477.514: 72.8151% ( 20) 00:09:10.235 9477.514 - 9527.926: 72.9807% ( 25) 00:09:10.235 9527.926 - 9578.338: 73.1197% ( 21) 00:09:10.235 9578.338 - 9628.751: 73.2455% ( 19) 00:09:10.235 9628.751 - 9679.163: 73.3713% ( 19) 00:09:10.235 9679.163 - 9729.575: 73.4905% ( 18) 00:09:10.235 9729.575 - 9779.988: 73.6295% ( 21) 00:09:10.235 9779.988 - 9830.400: 73.7421% ( 17) 00:09:10.235 9830.400 - 9880.812: 73.8414% ( 15) 00:09:10.235 9880.812 - 9931.225: 74.0201% ( 27) 00:09:10.235 9931.225 - 9981.637: 74.1393% ( 18) 00:09:10.235 9981.637 - 10032.049: 74.3247% ( 28) 00:09:10.235 10032.049 - 10082.462: 74.5101% ( 28) 00:09:10.235 10082.462 - 10132.874: 74.7021% ( 29) 00:09:10.235 10132.874 - 10183.286: 74.9603% ( 39) 00:09:10.235 10183.286 - 10233.698: 75.1324% ( 26) 00:09:10.235 10233.698 - 10284.111: 75.3046% ( 26) 00:09:10.235 10284.111 - 10334.523: 75.5098% ( 31) 00:09:10.235 10334.523 - 10384.935: 75.7415% ( 35) 00:09:10.235 10384.935 - 10435.348: 75.8739% ( 20) 00:09:10.235 10435.348 - 10485.760: 76.1123% ( 36) 00:09:10.235 10485.760 - 10536.172: 76.3440% ( 35) 00:09:10.235 10536.172 - 10586.585: 76.6155% ( 41) 00:09:10.235 10586.585 - 10636.997: 76.8075% ( 29) 00:09:10.235 10636.997 - 10687.409: 77.0591% ( 38) 00:09:10.235 10687.409 - 10737.822: 77.2709% ( 32) 00:09:10.235 10737.822 - 10788.234: 77.5159% ( 37) 00:09:10.235 10788.234 - 10838.646: 77.7145% ( 30) 00:09:10.235 10838.646 - 10889.058: 77.9198% ( 31) 00:09:10.235 10889.058 - 10939.471: 78.1713% ( 38) 00:09:10.235 10939.471 - 10989.883: 78.4560% ( 43) 00:09:10.235 10989.883 - 11040.295: 78.7142% ( 39) 00:09:10.235 11040.295 - 11090.708: 79.0056% ( 44) 00:09:10.235 11090.708 - 11141.120: 79.1976% ( 29) 00:09:10.235 11141.120 - 11191.532: 79.4823% ( 43) 00:09:10.235 11191.532 - 11241.945: 79.6941% ( 32) 00:09:10.235 11241.945 - 11292.357: 79.9788% ( 43) 00:09:10.235 11292.357 - 11342.769: 80.2370% ( 39) 00:09:10.235 11342.769 - 11393.182: 80.4952% ( 39) 00:09:10.235 11393.182 - 11443.594: 80.7998% ( 46) 00:09:10.235 11443.594 - 11494.006: 81.0845% ( 43) 00:09:10.235 11494.006 - 11544.418: 81.3559% ( 41) 00:09:10.235 11544.418 - 11594.831: 81.7333% ( 57) 00:09:10.235 11594.831 - 11645.243: 81.9849% ( 38) 00:09:10.235 11645.243 - 11695.655: 82.2100% ( 34) 00:09:10.235 
11695.655 - 11746.068: 82.4484% ( 36) 00:09:10.235 11746.068 - 11796.480: 82.6668% ( 33) 00:09:10.235 11796.480 - 11846.892: 82.8787% ( 32) 00:09:10.235 11846.892 - 11897.305: 83.0707% ( 29) 00:09:10.235 11897.305 - 11947.717: 83.2958% ( 34) 00:09:10.235 11947.717 - 11998.129: 83.5275% ( 35) 00:09:10.235 11998.129 - 12048.542: 83.7593% ( 35) 00:09:10.235 12048.542 - 12098.954: 84.0837% ( 49) 00:09:10.235 12098.954 - 12149.366: 84.3154% ( 35) 00:09:10.235 12149.366 - 12199.778: 84.5670% ( 38) 00:09:10.235 12199.778 - 12250.191: 84.7987% ( 35) 00:09:10.235 12250.191 - 12300.603: 85.0768% ( 42) 00:09:10.235 12300.603 - 12351.015: 85.3218% ( 37) 00:09:10.235 12351.015 - 12401.428: 85.5336% ( 32) 00:09:10.235 12401.428 - 12451.840: 85.7918% ( 39) 00:09:10.235 12451.840 - 12502.252: 85.9176% ( 19) 00:09:10.235 12502.252 - 12552.665: 86.1427% ( 34) 00:09:10.235 12552.665 - 12603.077: 86.4274% ( 43) 00:09:10.235 12603.077 - 12653.489: 86.5930% ( 25) 00:09:10.235 12653.489 - 12703.902: 86.7982% ( 31) 00:09:10.235 12703.902 - 12754.314: 86.9571% ( 24) 00:09:10.235 12754.314 - 12804.726: 87.2153% ( 39) 00:09:10.235 12804.726 - 12855.138: 87.4139% ( 30) 00:09:10.235 12855.138 - 12905.551: 87.6523% ( 36) 00:09:10.235 12905.551 - 13006.375: 88.0561% ( 61) 00:09:10.235 13006.375 - 13107.200: 88.5262% ( 71) 00:09:10.235 13107.200 - 13208.025: 89.0691% ( 82) 00:09:10.235 13208.025 - 13308.849: 89.5061% ( 66) 00:09:10.235 13308.849 - 13409.674: 89.9033% ( 60) 00:09:10.235 13409.674 - 13510.498: 90.3800% ( 72) 00:09:10.235 13510.498 - 13611.323: 90.7376% ( 54) 00:09:10.235 13611.323 - 13712.148: 91.2076% ( 71) 00:09:10.235 13712.148 - 13812.972: 91.6314% ( 64) 00:09:10.235 13812.972 - 13913.797: 92.0485% ( 63) 00:09:10.235 13913.797 - 14014.622: 92.4987% ( 68) 00:09:10.235 14014.622 - 14115.446: 92.9952% ( 75) 00:09:10.235 14115.446 - 14216.271: 93.4123% ( 63) 00:09:10.235 14216.271 - 14317.095: 93.7500% ( 51) 00:09:10.235 14317.095 - 14417.920: 94.0215% ( 41) 00:09:10.235 14417.920 - 14518.745: 94.5114% ( 74) 00:09:10.235 14518.745 - 14619.569: 94.7762% ( 40) 00:09:10.235 14619.569 - 14720.394: 95.1668% ( 59) 00:09:10.235 14720.394 - 14821.218: 95.4648% ( 45) 00:09:10.235 14821.218 - 14922.043: 96.0011% ( 81) 00:09:10.235 14922.043 - 15022.868: 96.1931% ( 29) 00:09:10.235 15022.868 - 15123.692: 96.4513% ( 39) 00:09:10.235 15123.692 - 15224.517: 96.6962% ( 37) 00:09:10.235 15224.517 - 15325.342: 96.9809% ( 43) 00:09:10.235 15325.342 - 15426.166: 97.2391% ( 39) 00:09:10.235 15426.166 - 15526.991: 97.4444% ( 31) 00:09:10.235 15526.991 - 15627.815: 97.6894% ( 37) 00:09:10.235 15627.815 - 15728.640: 97.8946% ( 31) 00:09:10.235 15728.640 - 15829.465: 98.0866% ( 29) 00:09:10.235 15829.465 - 15930.289: 98.2323% ( 22) 00:09:10.235 15930.289 - 16031.114: 98.3249% ( 14) 00:09:10.235 16031.114 - 16131.938: 98.4110% ( 13) 00:09:10.235 16131.938 - 16232.763: 98.4971% ( 13) 00:09:10.235 16232.763 - 16333.588: 98.5832% ( 13) 00:09:10.235 16333.588 - 16434.412: 98.6626% ( 12) 00:09:10.235 16434.412 - 16535.237: 98.7354% ( 11) 00:09:10.235 16535.237 - 16636.062: 98.8016% ( 10) 00:09:10.235 16636.062 - 16736.886: 98.8414% ( 6) 00:09:10.235 16736.886 - 16837.711: 98.8678% ( 4) 00:09:10.235 16837.711 - 16938.535: 98.9076% ( 6) 00:09:10.235 16938.535 - 17039.360: 98.9208% ( 2) 00:09:10.235 17039.360 - 17140.185: 98.9407% ( 3) 00:09:10.235 17140.185 - 17241.009: 98.9804% ( 6) 00:09:10.235 17241.009 - 17341.834: 99.0003% ( 3) 00:09:10.235 17341.834 - 17442.658: 99.0267% ( 4) 00:09:10.235 17442.658 - 17543.483: 99.0466% ( 3) 
00:09:10.235 17543.483 - 17644.308: 99.0665% ( 3) 00:09:10.235 17644.308 - 17745.132: 99.0996% ( 5) 00:09:10.235 17745.132 - 17845.957: 99.1261% ( 4) 00:09:10.236 17845.957 - 17946.782: 99.1459% ( 3) 00:09:10.236 17946.782 - 18047.606: 99.1525% ( 1) 00:09:10.236 31860.578 - 32062.228: 99.1856% ( 5) 00:09:10.236 32062.228 - 32263.877: 99.2651% ( 12) 00:09:10.236 32263.877 - 32465.526: 99.3181% ( 8) 00:09:10.236 32465.526 - 32667.175: 99.3710% ( 8) 00:09:10.236 32667.175 - 32868.825: 99.4306% ( 9) 00:09:10.236 32868.825 - 33070.474: 99.4902% ( 9) 00:09:10.236 33070.474 - 33272.123: 99.5564% ( 10) 00:09:10.236 33272.123 - 33473.772: 99.5961% ( 6) 00:09:10.236 33473.772 - 33675.422: 99.6690% ( 11) 00:09:10.236 33675.422 - 33877.071: 99.7219% ( 8) 00:09:10.236 33877.071 - 34078.720: 99.7881% ( 10) 00:09:10.236 34078.720 - 34280.369: 99.8543% ( 10) 00:09:10.236 34280.369 - 34482.018: 99.9139% ( 9) 00:09:10.236 34482.018 - 34683.668: 99.9669% ( 8) 00:09:10.236 34683.668 - 34885.317: 100.0000% ( 5) 00:09:10.236 00:09:10.236 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:10.236 ============================================================================== 00:09:10.236 Range in us Cumulative IO count 00:09:10.236 4940.406 - 4965.612: 0.0066% ( 1) 00:09:10.236 4965.612 - 4990.818: 0.0265% ( 3) 00:09:10.236 4990.818 - 5016.025: 0.0397% ( 2) 00:09:10.236 5016.025 - 5041.231: 0.0530% ( 2) 00:09:10.236 5041.231 - 5066.437: 0.0662% ( 2) 00:09:10.236 5066.437 - 5091.643: 0.0794% ( 2) 00:09:10.236 5091.643 - 5116.849: 0.0927% ( 2) 00:09:10.236 5116.849 - 5142.055: 0.1059% ( 2) 00:09:10.236 5142.055 - 5167.262: 0.1192% ( 2) 00:09:10.236 5167.262 - 5192.468: 0.1390% ( 3) 00:09:10.236 5192.468 - 5217.674: 0.1523% ( 2) 00:09:10.236 5217.674 - 5242.880: 0.1655% ( 2) 00:09:10.236 5242.880 - 5268.086: 0.1788% ( 2) 00:09:10.236 5268.086 - 5293.292: 0.1920% ( 2) 00:09:10.236 5293.292 - 5318.498: 0.2052% ( 2) 00:09:10.236 5318.498 - 5343.705: 0.2185% ( 2) 00:09:10.236 5343.705 - 5368.911: 0.2317% ( 2) 00:09:10.236 5368.911 - 5394.117: 0.2516% ( 3) 00:09:10.236 5394.117 - 5419.323: 0.2648% ( 2) 00:09:10.236 5419.323 - 5444.529: 0.2781% ( 2) 00:09:10.236 5444.529 - 5469.735: 0.2913% ( 2) 00:09:10.236 5469.735 - 5494.942: 0.3046% ( 2) 00:09:10.236 5494.942 - 5520.148: 0.3178% ( 2) 00:09:10.236 5520.148 - 5545.354: 0.3443% ( 4) 00:09:10.236 5545.354 - 5570.560: 0.4105% ( 10) 00:09:10.236 5570.560 - 5595.766: 0.6488% ( 36) 00:09:10.236 5595.766 - 5620.972: 0.9865% ( 51) 00:09:10.236 5620.972 - 5646.178: 1.3904% ( 61) 00:09:10.236 5646.178 - 5671.385: 1.8869% ( 75) 00:09:10.236 5671.385 - 5696.591: 2.4298% ( 82) 00:09:10.236 5696.591 - 5721.797: 3.0389% ( 92) 00:09:10.236 5721.797 - 5747.003: 3.6282% ( 89) 00:09:10.236 5747.003 - 5772.209: 4.2969% ( 101) 00:09:10.236 5772.209 - 5797.415: 4.9590% ( 100) 00:09:10.236 5797.415 - 5822.622: 5.6541% ( 105) 00:09:10.236 5822.622 - 5847.828: 6.4155% ( 115) 00:09:10.236 5847.828 - 5873.034: 7.0975% ( 103) 00:09:10.236 5873.034 - 5898.240: 7.9251% ( 125) 00:09:10.236 5898.240 - 5923.446: 8.7129% ( 119) 00:09:10.236 5923.446 - 5948.652: 9.4942% ( 118) 00:09:10.236 5948.652 - 5973.858: 10.3549% ( 130) 00:09:10.236 5973.858 - 5999.065: 11.3083% ( 144) 00:09:10.236 5999.065 - 6024.271: 12.1690% ( 130) 00:09:10.236 6024.271 - 6049.477: 13.0694% ( 136) 00:09:10.236 6049.477 - 6074.683: 13.9168% ( 128) 00:09:10.236 6074.683 - 6099.889: 14.7908% ( 132) 00:09:10.236 6099.889 - 6125.095: 15.6316% ( 127) 00:09:10.236 6125.095 - 6150.302: 16.5519% ( 139) 00:09:10.236 
6150.302 - 6175.508: 17.3729% ( 124) 00:09:10.236 6175.508 - 6200.714: 18.2865% ( 138) 00:09:10.236 6200.714 - 6225.920: 19.1936% ( 137) 00:09:10.236 6225.920 - 6251.126: 20.0278% ( 126) 00:09:10.236 6251.126 - 6276.332: 20.8885% ( 130) 00:09:10.236 6276.332 - 6301.538: 21.8419% ( 144) 00:09:10.236 6301.538 - 6326.745: 22.7953% ( 144) 00:09:10.236 6326.745 - 6351.951: 23.7156% ( 139) 00:09:10.236 6351.951 - 6377.157: 24.6425% ( 140) 00:09:10.236 6377.157 - 6402.363: 25.6356% ( 150) 00:09:10.236 6402.363 - 6427.569: 26.5691% ( 141) 00:09:10.236 6427.569 - 6452.775: 27.5159% ( 143) 00:09:10.236 6452.775 - 6503.188: 29.3631% ( 279) 00:09:10.236 6503.188 - 6553.600: 31.1639% ( 272) 00:09:10.236 6553.600 - 6604.012: 33.0310% ( 282) 00:09:10.236 6604.012 - 6654.425: 34.8980% ( 282) 00:09:10.236 6654.425 - 6704.837: 36.7982% ( 287) 00:09:10.236 6704.837 - 6755.249: 38.6520% ( 280) 00:09:10.236 6755.249 - 6805.662: 40.5191% ( 282) 00:09:10.236 6805.662 - 6856.074: 42.3729% ( 280) 00:09:10.236 6856.074 - 6906.486: 44.2598% ( 285) 00:09:10.236 6906.486 - 6956.898: 46.1202% ( 281) 00:09:10.236 6956.898 - 7007.311: 48.0072% ( 285) 00:09:10.236 7007.311 - 7057.723: 49.8808% ( 283) 00:09:10.236 7057.723 - 7108.135: 51.7148% ( 277) 00:09:10.236 7108.135 - 7158.548: 53.5752% ( 281) 00:09:10.236 7158.548 - 7208.960: 55.4555% ( 284) 00:09:10.236 7208.960 - 7259.372: 57.3027% ( 279) 00:09:10.236 7259.372 - 7309.785: 59.2029% ( 287) 00:09:10.236 7309.785 - 7360.197: 61.0302% ( 276) 00:09:10.236 7360.197 - 7410.609: 62.8774% ( 279) 00:09:10.236 7410.609 - 7461.022: 64.3803% ( 227) 00:09:10.236 7461.022 - 7511.434: 65.6382% ( 190) 00:09:10.236 7511.434 - 7561.846: 66.6380% ( 151) 00:09:10.236 7561.846 - 7612.258: 67.4126% ( 117) 00:09:10.236 7612.258 - 7662.671: 68.0416% ( 95) 00:09:10.236 7662.671 - 7713.083: 68.5779% ( 81) 00:09:10.236 7713.083 - 7763.495: 69.0413% ( 70) 00:09:10.236 7763.495 - 7813.908: 69.3392% ( 45) 00:09:10.236 7813.908 - 7864.320: 69.6041% ( 40) 00:09:10.236 7864.320 - 7914.732: 69.8093% ( 31) 00:09:10.236 7914.732 - 7965.145: 70.0013% ( 29) 00:09:10.236 7965.145 - 8015.557: 70.1801% ( 27) 00:09:10.236 8015.557 - 8065.969: 70.3655% ( 28) 00:09:10.236 8065.969 - 8116.382: 70.5442% ( 27) 00:09:10.236 8116.382 - 8166.794: 70.6899% ( 22) 00:09:10.236 8166.794 - 8217.206: 70.8157% ( 19) 00:09:10.236 8217.206 - 8267.618: 70.9017% ( 13) 00:09:10.236 8267.618 - 8318.031: 71.0011% ( 15) 00:09:10.236 8318.031 - 8368.443: 71.0606% ( 9) 00:09:10.236 8368.443 - 8418.855: 71.1467% ( 13) 00:09:10.236 8418.855 - 8469.268: 71.2063% ( 9) 00:09:10.236 8469.268 - 8519.680: 71.2659% ( 9) 00:09:10.236 8519.680 - 8570.092: 71.3387% ( 11) 00:09:10.236 8570.092 - 8620.505: 71.4049% ( 10) 00:09:10.236 8620.505 - 8670.917: 71.4711% ( 10) 00:09:10.236 8670.917 - 8721.329: 71.5307% ( 9) 00:09:10.236 8721.329 - 8771.742: 71.5969% ( 10) 00:09:10.236 8771.742 - 8822.154: 71.6631% ( 10) 00:09:10.236 8822.154 - 8872.566: 71.7293% ( 10) 00:09:10.236 8872.566 - 8922.978: 71.8154% ( 13) 00:09:10.236 8922.978 - 8973.391: 71.9015% ( 13) 00:09:10.236 8973.391 - 9023.803: 71.9876% ( 13) 00:09:10.236 9023.803 - 9074.215: 72.0736% ( 13) 00:09:10.236 9074.215 - 9124.628: 72.1465% ( 11) 00:09:10.236 9124.628 - 9175.040: 72.2656% ( 18) 00:09:10.236 9175.040 - 9225.452: 72.3782% ( 17) 00:09:10.236 9225.452 - 9275.865: 72.4907% ( 17) 00:09:10.236 9275.865 - 9326.277: 72.5967% ( 16) 00:09:10.236 9326.277 - 9376.689: 72.7026% ( 16) 00:09:10.236 9376.689 - 9427.102: 72.8019% ( 15) 00:09:10.236 9427.102 - 9477.514: 72.9145% ( 17) 
00:09:10.236 9477.514 - 9527.926: 73.0204% ( 16) 00:09:10.236 9527.926 - 9578.338: 73.1396% ( 18) 00:09:10.236 9578.338 - 9628.751: 73.3117% ( 26) 00:09:10.236 9628.751 - 9679.163: 73.4971% ( 28) 00:09:10.236 9679.163 - 9729.575: 73.6891% ( 29) 00:09:10.236 9729.575 - 9779.988: 73.8546% ( 25) 00:09:10.236 9779.988 - 9830.400: 74.0267% ( 26) 00:09:10.236 9830.400 - 9880.812: 74.2254% ( 30) 00:09:10.236 9880.812 - 9931.225: 74.4372% ( 32) 00:09:10.236 9931.225 - 9981.637: 74.6226% ( 28) 00:09:10.236 9981.637 - 10032.049: 74.8212% ( 30) 00:09:10.236 10032.049 - 10082.462: 74.9934% ( 26) 00:09:10.236 10082.462 - 10132.874: 75.1854% ( 29) 00:09:10.236 10132.874 - 10183.286: 75.3840% ( 30) 00:09:10.236 10183.286 - 10233.698: 75.5892% ( 31) 00:09:10.236 10233.698 - 10284.111: 75.7879% ( 30) 00:09:10.236 10284.111 - 10334.523: 75.9997% ( 32) 00:09:10.236 10334.523 - 10384.935: 76.2050% ( 31) 00:09:10.236 10384.935 - 10435.348: 76.4102% ( 31) 00:09:10.237 10435.348 - 10485.760: 76.6486% ( 36) 00:09:10.237 10485.760 - 10536.172: 76.8671% ( 33) 00:09:10.237 10536.172 - 10586.585: 77.1253% ( 39) 00:09:10.237 10586.585 - 10636.997: 77.4100% ( 43) 00:09:10.237 10636.997 - 10687.409: 77.6748% ( 40) 00:09:10.237 10687.409 - 10737.822: 77.9529% ( 42) 00:09:10.237 10737.822 - 10788.234: 78.2177% ( 40) 00:09:10.237 10788.234 - 10838.646: 78.4759% ( 39) 00:09:10.237 10838.646 - 10889.058: 78.7606% ( 43) 00:09:10.237 10889.058 - 10939.471: 79.0585% ( 45) 00:09:10.237 10939.471 - 10989.883: 79.3366% ( 42) 00:09:10.237 10989.883 - 11040.295: 79.6213% ( 43) 00:09:10.237 11040.295 - 11090.708: 79.9060% ( 43) 00:09:10.237 11090.708 - 11141.120: 80.2238% ( 48) 00:09:10.237 11141.120 - 11191.532: 80.4820% ( 39) 00:09:10.237 11191.532 - 11241.945: 80.7402% ( 39) 00:09:10.237 11241.945 - 11292.357: 80.9918% ( 38) 00:09:10.237 11292.357 - 11342.769: 81.2632% ( 41) 00:09:10.237 11342.769 - 11393.182: 81.4950% ( 35) 00:09:10.237 11393.182 - 11443.594: 81.7466% ( 38) 00:09:10.237 11443.594 - 11494.006: 81.9849% ( 36) 00:09:10.237 11494.006 - 11544.418: 82.2497% ( 40) 00:09:10.237 11544.418 - 11594.831: 82.4748% ( 34) 00:09:10.237 11594.831 - 11645.243: 82.7066% ( 35) 00:09:10.237 11645.243 - 11695.655: 82.9251% ( 33) 00:09:10.237 11695.655 - 11746.068: 83.1502% ( 34) 00:09:10.237 11746.068 - 11796.480: 83.3951% ( 37) 00:09:10.237 11796.480 - 11846.892: 83.5938% ( 30) 00:09:10.237 11846.892 - 11897.305: 83.7858% ( 29) 00:09:10.237 11897.305 - 11947.717: 83.9910% ( 31) 00:09:10.237 11947.717 - 11998.129: 84.1962% ( 31) 00:09:10.237 11998.129 - 12048.542: 84.3949% ( 30) 00:09:10.237 12048.542 - 12098.954: 84.6332% ( 36) 00:09:10.237 12098.954 - 12149.366: 84.8252% ( 29) 00:09:10.237 12149.366 - 12199.778: 85.0305% ( 31) 00:09:10.237 12199.778 - 12250.191: 85.2357% ( 31) 00:09:10.237 12250.191 - 12300.603: 85.4476% ( 32) 00:09:10.237 12300.603 - 12351.015: 85.7124% ( 40) 00:09:10.237 12351.015 - 12401.428: 85.9507% ( 36) 00:09:10.237 12401.428 - 12451.840: 86.2090% ( 39) 00:09:10.237 12451.840 - 12502.252: 86.4274% ( 33) 00:09:10.237 12502.252 - 12552.665: 86.6989% ( 41) 00:09:10.237 12552.665 - 12603.077: 86.9703% ( 41) 00:09:10.237 12603.077 - 12653.489: 87.1954% ( 34) 00:09:10.237 12653.489 - 12703.902: 87.4007% ( 31) 00:09:10.237 12703.902 - 12754.314: 87.6059% ( 31) 00:09:10.237 12754.314 - 12804.726: 87.8112% ( 31) 00:09:10.237 12804.726 - 12855.138: 87.9833% ( 26) 00:09:10.237 12855.138 - 12905.551: 88.1753% ( 29) 00:09:10.237 12905.551 - 13006.375: 88.4865% ( 47) 00:09:10.237 13006.375 - 13107.200: 88.8175% ( 50) 
00:09:10.237 13107.200 - 13208.025: 89.2280% ( 62) 00:09:10.237 13208.025 - 13308.849: 89.6385% ( 62) 00:09:10.237 13308.849 - 13409.674: 90.1152% ( 72) 00:09:10.237 13409.674 - 13510.498: 90.5654% ( 68) 00:09:10.237 13510.498 - 13611.323: 91.0487% ( 73) 00:09:10.237 13611.323 - 13712.148: 91.5056% ( 69) 00:09:10.237 13712.148 - 13812.972: 91.9889% ( 73) 00:09:10.237 13812.972 - 13913.797: 92.4523% ( 70) 00:09:10.237 13913.797 - 14014.622: 92.8827% ( 65) 00:09:10.237 14014.622 - 14115.446: 93.2998% ( 63) 00:09:10.237 14115.446 - 14216.271: 93.6772% ( 57) 00:09:10.237 14216.271 - 14317.095: 94.0612% ( 58) 00:09:10.237 14317.095 - 14417.920: 94.4253% ( 55) 00:09:10.237 14417.920 - 14518.745: 94.8226% ( 60) 00:09:10.237 14518.745 - 14619.569: 95.2198% ( 60) 00:09:10.237 14619.569 - 14720.394: 95.5707% ( 53) 00:09:10.237 14720.394 - 14821.218: 95.8885% ( 48) 00:09:10.237 14821.218 - 14922.043: 96.2328% ( 52) 00:09:10.237 14922.043 - 15022.868: 96.5506% ( 48) 00:09:10.237 15022.868 - 15123.692: 96.8353% ( 43) 00:09:10.237 15123.692 - 15224.517: 97.1001% ( 40) 00:09:10.237 15224.517 - 15325.342: 97.3451% ( 37) 00:09:10.237 15325.342 - 15426.166: 97.5702% ( 34) 00:09:10.237 15426.166 - 15526.991: 97.7688% ( 30) 00:09:10.237 15526.991 - 15627.815: 97.9476% ( 27) 00:09:10.237 15627.815 - 15728.640: 98.0866% ( 21) 00:09:10.237 15728.640 - 15829.465: 98.2323% ( 22) 00:09:10.237 15829.465 - 15930.289: 98.3713% ( 21) 00:09:10.237 15930.289 - 16031.114: 98.4772% ( 16) 00:09:10.237 16031.114 - 16131.938: 98.5699% ( 14) 00:09:10.237 16131.938 - 16232.763: 98.6494% ( 12) 00:09:10.237 16232.763 - 16333.588: 98.7222% ( 11) 00:09:10.237 16333.588 - 16434.412: 98.7884% ( 10) 00:09:10.237 16434.412 - 16535.237: 98.8281% ( 6) 00:09:10.237 16535.237 - 16636.062: 98.8612% ( 5) 00:09:10.237 16636.062 - 16736.886: 98.8877% ( 4) 00:09:10.237 16736.886 - 16837.711: 98.9208% ( 5) 00:09:10.237 16837.711 - 16938.535: 98.9473% ( 4) 00:09:10.237 16938.535 - 17039.360: 98.9738% ( 4) 00:09:10.237 17039.360 - 17140.185: 99.0003% ( 4) 00:09:10.237 17140.185 - 17241.009: 99.0334% ( 5) 00:09:10.237 17241.009 - 17341.834: 99.0665% ( 5) 00:09:10.237 17341.834 - 17442.658: 99.0930% ( 4) 00:09:10.237 17442.658 - 17543.483: 99.1261% ( 5) 00:09:10.237 17543.483 - 17644.308: 99.1525% ( 4) 00:09:10.237 33675.422 - 33877.071: 99.1923% ( 6) 00:09:10.237 33877.071 - 34078.720: 99.2519% ( 9) 00:09:10.237 34078.720 - 34280.369: 99.3114% ( 9) 00:09:10.237 34280.369 - 34482.018: 99.3843% ( 11) 00:09:10.237 34482.018 - 34683.668: 99.4505% ( 10) 00:09:10.237 34683.668 - 34885.317: 99.5167% ( 10) 00:09:10.237 34885.317 - 35086.966: 99.5763% ( 9) 00:09:10.237 35086.966 - 35288.615: 99.6425% ( 10) 00:09:10.237 35288.615 - 35490.265: 99.7087% ( 10) 00:09:10.237 35490.265 - 35691.914: 99.7749% ( 10) 00:09:10.237 35691.914 - 35893.563: 99.8411% ( 10) 00:09:10.237 35893.563 - 36095.212: 99.9007% ( 9) 00:09:10.237 36095.212 - 36296.862: 99.9669% ( 10) 00:09:10.237 36296.862 - 36498.511: 100.0000% ( 5) 00:09:10.237 00:09:10.237 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:10.237 ============================================================================== 00:09:10.237 Range in us Cumulative IO count 00:09:10.237 3906.954 - 3932.160: 0.0331% ( 5) 00:09:10.237 3932.160 - 3957.366: 0.0794% ( 7) 00:09:10.237 3957.366 - 3982.572: 0.0993% ( 3) 00:09:10.237 4032.985 - 4058.191: 0.1126% ( 2) 00:09:10.237 4058.191 - 4083.397: 0.1324% ( 3) 00:09:10.237 4083.397 - 4108.603: 0.1457% ( 2) 00:09:10.237 4108.603 - 4133.809: 0.1589% ( 2) 
00:09:10.237 4133.809 - 4159.015: 0.1721% ( 2) 00:09:10.237 4159.015 - 4184.222: 0.1854% ( 2) 00:09:10.237 4184.222 - 4209.428: 0.2052% ( 3) 00:09:10.237 4209.428 - 4234.634: 0.2185% ( 2) 00:09:10.237 4234.634 - 4259.840: 0.2317% ( 2) 00:09:10.237 4259.840 - 4285.046: 0.2450% ( 2) 00:09:10.237 4285.046 - 4310.252: 0.2582% ( 2) 00:09:10.237 4310.252 - 4335.458: 0.2715% ( 2) 00:09:10.237 4335.458 - 4360.665: 0.2913% ( 3) 00:09:10.237 4360.665 - 4385.871: 0.3046% ( 2) 00:09:10.237 4385.871 - 4411.077: 0.3178% ( 2) 00:09:10.237 4411.077 - 4436.283: 0.3244% ( 1) 00:09:10.237 4436.283 - 4461.489: 0.3377% ( 2) 00:09:10.237 4461.489 - 4486.695: 0.3509% ( 2) 00:09:10.237 4486.695 - 4511.902: 0.3708% ( 3) 00:09:10.237 4511.902 - 4537.108: 0.3840% ( 2) 00:09:10.237 4537.108 - 4562.314: 0.3972% ( 2) 00:09:10.237 4562.314 - 4587.520: 0.4105% ( 2) 00:09:10.237 4587.520 - 4612.726: 0.4237% ( 2) 00:09:10.237 4612.726 - 4637.932: 0.4370% ( 2) 00:09:10.237 4637.932 - 4663.138: 0.4502% ( 2) 00:09:10.237 4663.138 - 4688.345: 0.4701% ( 3) 00:09:10.237 4688.345 - 4713.551: 0.4833% ( 2) 00:09:10.237 4713.551 - 4738.757: 0.4966% ( 2) 00:09:10.237 4738.757 - 4763.963: 0.5098% ( 2) 00:09:10.237 4763.963 - 4789.169: 0.5230% ( 2) 00:09:10.237 4789.169 - 4814.375: 0.5363% ( 2) 00:09:10.237 4814.375 - 4839.582: 0.5561% ( 3) 00:09:10.237 4839.582 - 4864.788: 0.5694% ( 2) 00:09:10.237 4864.788 - 4889.994: 0.5826% ( 2) 00:09:10.237 4889.994 - 4915.200: 0.5959% ( 2) 00:09:10.237 4915.200 - 4940.406: 0.6091% ( 2) 00:09:10.237 4940.406 - 4965.612: 0.6224% ( 2) 00:09:10.237 4965.612 - 4990.818: 0.6422% ( 3) 00:09:10.237 4990.818 - 5016.025: 0.6555% ( 2) 00:09:10.237 5016.025 - 5041.231: 0.6687% ( 2) 00:09:10.237 5041.231 - 5066.437: 0.6819% ( 2) 00:09:10.237 5066.437 - 5091.643: 0.6952% ( 2) 00:09:10.237 5091.643 - 5116.849: 0.7084% ( 2) 00:09:10.237 5116.849 - 5142.055: 0.7217% ( 2) 00:09:10.237 5142.055 - 5167.262: 0.7349% ( 2) 00:09:10.237 5167.262 - 5192.468: 0.7548% ( 3) 00:09:10.237 5192.468 - 5217.674: 0.7680% ( 2) 00:09:10.237 5217.674 - 5242.880: 0.7812% ( 2) 00:09:10.237 5242.880 - 5268.086: 0.7945% ( 2) 00:09:10.237 5268.086 - 5293.292: 0.8077% ( 2) 00:09:10.237 5293.292 - 5318.498: 0.8276% ( 3) 00:09:10.237 5343.705 - 5368.911: 0.8408% ( 2) 00:09:10.237 5368.911 - 5394.117: 0.8475% ( 1) 00:09:10.237 5494.942 - 5520.148: 0.8607% ( 2) 00:09:10.237 5520.148 - 5545.354: 0.9070% ( 7) 00:09:10.237 5545.354 - 5570.560: 1.0726% ( 25) 00:09:10.237 5570.560 - 5595.766: 1.3109% ( 36) 00:09:10.237 5595.766 - 5620.972: 1.5824% ( 41) 00:09:10.237 5620.972 - 5646.178: 1.9134% ( 50) 00:09:10.237 5646.178 - 5671.385: 2.3835% ( 71) 00:09:10.237 5671.385 - 5696.591: 2.9264% ( 82) 00:09:10.237 5696.591 - 5721.797: 3.5289% ( 91) 00:09:10.237 5721.797 - 5747.003: 4.2240% ( 105) 00:09:10.237 5747.003 - 5772.209: 4.8861% ( 100) 00:09:10.237 5772.209 - 5797.415: 5.5614% ( 102) 00:09:10.237 5797.415 - 5822.622: 6.2566% ( 105) 00:09:10.237 5822.622 - 5847.828: 6.9849% ( 110) 00:09:10.237 5847.828 - 5873.034: 7.7595% ( 117) 00:09:10.237 5873.034 - 5898.240: 8.5938% ( 126) 00:09:10.237 5898.240 - 5923.446: 9.2956% ( 106) 00:09:10.237 5923.446 - 5948.652: 10.0437% ( 113) 00:09:10.237 5948.652 - 5973.858: 10.9441% ( 136) 00:09:10.237 5973.858 - 5999.065: 11.8512% ( 137) 00:09:10.238 5999.065 - 6024.271: 12.7052% ( 129) 00:09:10.238 6024.271 - 6049.477: 13.5659% ( 130) 00:09:10.238 6049.477 - 6074.683: 14.4134% ( 128) 00:09:10.238 6074.683 - 6099.889: 15.3469% ( 141) 00:09:10.238 6099.889 - 6125.095: 16.1944% ( 128) 00:09:10.238 6125.095 - 
6150.302: 17.0683% ( 132) 00:09:10.238 6150.302 - 6175.508: 17.9224% ( 129) 00:09:10.238 6175.508 - 6200.714: 18.8890% ( 146) 00:09:10.238 6200.714 - 6225.920: 19.7299% ( 127) 00:09:10.238 6225.920 - 6251.126: 20.6038% ( 132) 00:09:10.238 6251.126 - 6276.332: 21.4778% ( 132) 00:09:10.238 6276.332 - 6301.538: 22.3980% ( 139) 00:09:10.238 6301.538 - 6326.745: 23.3382% ( 142) 00:09:10.238 6326.745 - 6351.951: 24.2519% ( 138) 00:09:10.238 6351.951 - 6377.157: 25.1126% ( 130) 00:09:10.238 6377.157 - 6402.363: 26.0792% ( 146) 00:09:10.238 6402.363 - 6427.569: 26.9465% ( 131) 00:09:10.238 6427.569 - 6452.775: 27.8602% ( 138) 00:09:10.238 6452.775 - 6503.188: 29.7140% ( 280) 00:09:10.238 6503.188 - 6553.600: 31.4817% ( 267) 00:09:10.238 6553.600 - 6604.012: 33.3223% ( 278) 00:09:10.238 6604.012 - 6654.425: 35.1629% ( 278) 00:09:10.238 6654.425 - 6704.837: 37.0034% ( 278) 00:09:10.238 6704.837 - 6755.249: 38.7910% ( 270) 00:09:10.238 6755.249 - 6805.662: 40.6846% ( 286) 00:09:10.238 6805.662 - 6856.074: 42.5053% ( 275) 00:09:10.238 6856.074 - 6906.486: 44.3724% ( 282) 00:09:10.238 6906.486 - 6956.898: 46.1997% ( 276) 00:09:10.238 6956.898 - 7007.311: 48.0800% ( 284) 00:09:10.238 7007.311 - 7057.723: 49.9007% ( 275) 00:09:10.238 7057.723 - 7108.135: 51.7545% ( 280) 00:09:10.238 7108.135 - 7158.548: 53.6083% ( 280) 00:09:10.238 7158.548 - 7208.960: 55.4290% ( 275) 00:09:10.238 7208.960 - 7259.372: 57.2100% ( 269) 00:09:10.238 7259.372 - 7309.785: 59.0373% ( 276) 00:09:10.238 7309.785 - 7360.197: 60.8845% ( 279) 00:09:10.238 7360.197 - 7410.609: 62.5861% ( 257) 00:09:10.238 7410.609 - 7461.022: 64.0890% ( 227) 00:09:10.238 7461.022 - 7511.434: 65.3006% ( 183) 00:09:10.238 7511.434 - 7561.846: 66.2275% ( 140) 00:09:10.238 7561.846 - 7612.258: 66.9094% ( 103) 00:09:10.238 7612.258 - 7662.671: 67.5450% ( 96) 00:09:10.238 7662.671 - 7713.083: 68.0548% ( 77) 00:09:10.238 7713.083 - 7763.495: 68.4653% ( 62) 00:09:10.238 7763.495 - 7813.908: 68.8030% ( 51) 00:09:10.238 7813.908 - 7864.320: 69.0744% ( 41) 00:09:10.238 7864.320 - 7914.732: 69.2797% ( 31) 00:09:10.238 7914.732 - 7965.145: 69.4650% ( 28) 00:09:10.238 7965.145 - 8015.557: 69.6570% ( 29) 00:09:10.238 8015.557 - 8065.969: 69.8159% ( 24) 00:09:10.238 8065.969 - 8116.382: 69.9947% ( 27) 00:09:10.238 8116.382 - 8166.794: 70.1205% ( 19) 00:09:10.238 8166.794 - 8217.206: 70.2264% ( 16) 00:09:10.238 8217.206 - 8267.618: 70.2993% ( 11) 00:09:10.238 8267.618 - 8318.031: 70.3853% ( 13) 00:09:10.238 8318.031 - 8368.443: 70.4780% ( 14) 00:09:10.238 8368.443 - 8418.855: 70.5641% ( 13) 00:09:10.238 8418.855 - 8469.268: 70.6369% ( 11) 00:09:10.238 8469.268 - 8519.680: 70.7230% ( 13) 00:09:10.238 8519.680 - 8570.092: 70.8024% ( 12) 00:09:10.238 8570.092 - 8620.505: 70.8885% ( 13) 00:09:10.238 8620.505 - 8670.917: 70.9680% ( 12) 00:09:10.238 8670.917 - 8721.329: 71.0606% ( 14) 00:09:10.238 8721.329 - 8771.742: 71.1401% ( 12) 00:09:10.238 8771.742 - 8822.154: 71.2328% ( 14) 00:09:10.238 8822.154 - 8872.566: 71.3387% ( 16) 00:09:10.238 8872.566 - 8922.978: 71.4513% ( 17) 00:09:10.238 8922.978 - 8973.391: 71.5506% ( 15) 00:09:10.238 8973.391 - 9023.803: 71.6565% ( 16) 00:09:10.238 9023.803 - 9074.215: 71.7889% ( 20) 00:09:10.238 9074.215 - 9124.628: 71.9147% ( 19) 00:09:10.238 9124.628 - 9175.040: 72.0471% ( 20) 00:09:10.238 9175.040 - 9225.452: 72.1994% ( 23) 00:09:10.238 9225.452 - 9275.865: 72.3716% ( 26) 00:09:10.238 9275.865 - 9326.277: 72.5371% ( 25) 00:09:10.238 9326.277 - 9376.689: 72.6960% ( 24) 00:09:10.238 9376.689 - 9427.102: 72.8416% ( 22) 00:09:10.238 
9427.102 - 9477.514: 72.9873% ( 22) 00:09:10.238 9477.514 - 9527.926: 73.1528% ( 25) 00:09:10.238 9527.926 - 9578.338: 73.3183% ( 25) 00:09:10.238 9578.338 - 9628.751: 73.5103% ( 29) 00:09:10.238 9628.751 - 9679.163: 73.6758% ( 25) 00:09:10.238 9679.163 - 9729.575: 73.8811% ( 31) 00:09:10.238 9729.575 - 9779.988: 74.1062% ( 34) 00:09:10.238 9779.988 - 9830.400: 74.3644% ( 39) 00:09:10.238 9830.400 - 9880.812: 74.5697% ( 31) 00:09:10.238 9880.812 - 9931.225: 74.7815% ( 32) 00:09:10.238 9931.225 - 9981.637: 75.0265% ( 37) 00:09:10.238 9981.637 - 10032.049: 75.2582% ( 35) 00:09:10.238 10032.049 - 10082.462: 75.4701% ( 32) 00:09:10.238 10082.462 - 10132.874: 75.7283% ( 39) 00:09:10.238 10132.874 - 10183.286: 75.9931% ( 40) 00:09:10.238 10183.286 - 10233.698: 76.2381% ( 37) 00:09:10.238 10233.698 - 10284.111: 76.4698% ( 35) 00:09:10.238 10284.111 - 10334.523: 76.7413% ( 41) 00:09:10.238 10334.523 - 10384.935: 77.0127% ( 41) 00:09:10.238 10384.935 - 10435.348: 77.2974% ( 43) 00:09:10.238 10435.348 - 10485.760: 77.5424% ( 37) 00:09:10.238 10485.760 - 10536.172: 77.8138% ( 41) 00:09:10.238 10536.172 - 10586.585: 78.0919% ( 42) 00:09:10.238 10586.585 - 10636.997: 78.3633% ( 41) 00:09:10.238 10636.997 - 10687.409: 78.6414% ( 42) 00:09:10.238 10687.409 - 10737.822: 78.9195% ( 42) 00:09:10.238 10737.822 - 10788.234: 79.1843% ( 40) 00:09:10.238 10788.234 - 10838.646: 79.4624% ( 42) 00:09:10.238 10838.646 - 10889.058: 79.7206% ( 39) 00:09:10.238 10889.058 - 10939.471: 79.9921% ( 41) 00:09:10.238 10939.471 - 10989.883: 80.2635% ( 41) 00:09:10.238 10989.883 - 11040.295: 80.5151% ( 38) 00:09:10.238 11040.295 - 11090.708: 80.7733% ( 39) 00:09:10.238 11090.708 - 11141.120: 81.0183% ( 37) 00:09:10.238 11141.120 - 11191.532: 81.2500% ( 35) 00:09:10.238 11191.532 - 11241.945: 81.4883% ( 36) 00:09:10.238 11241.945 - 11292.357: 81.7267% ( 36) 00:09:10.238 11292.357 - 11342.769: 81.9849% ( 39) 00:09:10.238 11342.769 - 11393.182: 82.2497% ( 40) 00:09:10.238 11393.182 - 11443.594: 82.4748% ( 34) 00:09:10.238 11443.594 - 11494.006: 82.7132% ( 36) 00:09:10.238 11494.006 - 11544.418: 82.9449% ( 35) 00:09:10.238 11544.418 - 11594.831: 83.1700% ( 34) 00:09:10.238 11594.831 - 11645.243: 83.3819% ( 32) 00:09:10.238 11645.243 - 11695.655: 83.6533% ( 41) 00:09:10.238 11695.655 - 11746.068: 83.8917% ( 36) 00:09:10.238 11746.068 - 11796.480: 84.0771% ( 28) 00:09:10.238 11796.480 - 11846.892: 84.2691% ( 29) 00:09:10.238 11846.892 - 11897.305: 84.4743% ( 31) 00:09:10.238 11897.305 - 11947.717: 84.6796% ( 31) 00:09:10.238 11947.717 - 11998.129: 84.8451% ( 25) 00:09:10.238 11998.129 - 12048.542: 85.0106% ( 25) 00:09:10.238 12048.542 - 12098.954: 85.1960% ( 28) 00:09:10.238 12098.954 - 12149.366: 85.3747% ( 27) 00:09:10.238 12149.366 - 12199.778: 85.5601% ( 28) 00:09:10.238 12199.778 - 12250.191: 85.7256% ( 25) 00:09:10.238 12250.191 - 12300.603: 85.8978% ( 26) 00:09:10.238 12300.603 - 12351.015: 86.0699% ( 26) 00:09:10.238 12351.015 - 12401.428: 86.2619% ( 29) 00:09:10.238 12401.428 - 12451.840: 86.4076% ( 22) 00:09:10.238 12451.840 - 12502.252: 86.5731% ( 25) 00:09:10.238 12502.252 - 12552.665: 86.7519% ( 27) 00:09:10.238 12552.665 - 12603.077: 86.9571% ( 31) 00:09:10.238 12603.077 - 12653.489: 87.1623% ( 31) 00:09:10.238 12653.489 - 12703.902: 87.3742% ( 32) 00:09:10.238 12703.902 - 12754.314: 87.6059% ( 35) 00:09:10.238 12754.314 - 12804.726: 87.8178% ( 32) 00:09:10.238 12804.726 - 12855.138: 88.0363% ( 33) 00:09:10.238 12855.138 - 12905.551: 88.2217% ( 28) 00:09:10.238 12905.551 - 13006.375: 88.6454% ( 64) 00:09:10.238 
13006.375 - 13107.200: 89.0294% ( 58) 00:09:10.238 13107.200 - 13208.025: 89.4399% ( 62) 00:09:10.238 13208.025 - 13308.849: 89.8570% ( 63) 00:09:10.238 13308.849 - 13409.674: 90.3072% ( 68) 00:09:10.238 13409.674 - 13510.498: 90.7905% ( 73) 00:09:10.238 13510.498 - 13611.323: 91.2341% ( 67) 00:09:10.238 13611.323 - 13712.148: 91.7240% ( 74) 00:09:10.238 13712.148 - 13812.972: 92.2206% ( 75) 00:09:10.238 13812.972 - 13913.797: 92.7105% ( 74) 00:09:10.238 13913.797 - 14014.622: 93.1409% ( 65) 00:09:10.238 14014.622 - 14115.446: 93.5514% ( 62) 00:09:10.238 14115.446 - 14216.271: 93.9552% ( 61) 00:09:10.238 14216.271 - 14317.095: 94.3260% ( 56) 00:09:10.238 14317.095 - 14417.920: 94.6901% ( 55) 00:09:10.238 14417.920 - 14518.745: 95.0675% ( 57) 00:09:10.238 14518.745 - 14619.569: 95.4052% ( 51) 00:09:10.238 14619.569 - 14720.394: 95.7296% ( 49) 00:09:10.238 14720.394 - 14821.218: 96.0673% ( 51) 00:09:10.238 14821.218 - 14922.043: 96.3652% ( 45) 00:09:10.238 14922.043 - 15022.868: 96.6367% ( 41) 00:09:10.238 15022.868 - 15123.692: 96.8816% ( 37) 00:09:10.238 15123.692 - 15224.517: 97.1133% ( 35) 00:09:10.238 15224.517 - 15325.342: 97.3186% ( 31) 00:09:10.238 15325.342 - 15426.166: 97.5172% ( 30) 00:09:10.238 15426.166 - 15526.991: 97.6761% ( 24) 00:09:10.238 15526.991 - 15627.815: 97.8483% ( 26) 00:09:10.238 15627.815 - 15728.640: 97.9939% ( 22) 00:09:10.238 15728.640 - 15829.465: 98.1329% ( 21) 00:09:10.238 15829.465 - 15930.289: 98.2455% ( 17) 00:09:10.238 15930.289 - 16031.114: 98.3581% ( 17) 00:09:10.238 16031.114 - 16131.938: 98.4507% ( 14) 00:09:10.238 16131.938 - 16232.763: 98.5368% ( 13) 00:09:10.238 16232.763 - 16333.588: 98.5964% ( 9) 00:09:10.238 16333.588 - 16434.412: 98.6494% ( 8) 00:09:10.238 16434.412 - 16535.237: 98.7023% ( 8) 00:09:10.238 16535.237 - 16636.062: 98.7619% ( 9) 00:09:10.238 16636.062 - 16736.886: 98.8149% ( 8) 00:09:10.238 16736.886 - 16837.711: 98.8546% ( 6) 00:09:10.238 16837.711 - 16938.535: 98.8877% ( 5) 00:09:10.238 16938.535 - 17039.360: 98.9208% ( 5) 00:09:10.238 17039.360 - 17140.185: 98.9539% ( 5) 00:09:10.238 17140.185 - 17241.009: 98.9672% ( 2) 00:09:10.239 17442.658 - 17543.483: 98.9936% ( 4) 00:09:10.239 17543.483 - 17644.308: 99.0267% ( 5) 00:09:10.239 17644.308 - 17745.132: 99.0599% ( 5) 00:09:10.239 17745.132 - 17845.957: 99.0863% ( 4) 00:09:10.239 17845.957 - 17946.782: 99.1194% ( 5) 00:09:10.239 17946.782 - 18047.606: 99.1459% ( 4) 00:09:10.239 18047.606 - 18148.431: 99.1525% ( 1) 00:09:10.239 35893.563 - 36095.212: 99.1923% ( 6) 00:09:10.239 36095.212 - 36296.862: 99.2585% ( 10) 00:09:10.239 36296.862 - 36498.511: 99.3313% ( 11) 00:09:10.239 36498.511 - 36700.160: 99.3909% ( 9) 00:09:10.239 36700.160 - 36901.809: 99.4571% ( 10) 00:09:10.239 36901.809 - 37103.458: 99.5167% ( 9) 00:09:10.239 37103.458 - 37305.108: 99.5829% ( 10) 00:09:10.239 37305.108 - 37506.757: 99.6557% ( 11) 00:09:10.239 37506.757 - 37708.406: 99.7153% ( 9) 00:09:10.239 37708.406 - 37910.055: 99.7815% ( 10) 00:09:10.239 37910.055 - 38111.705: 99.8477% ( 10) 00:09:10.239 38111.705 - 38313.354: 99.9139% ( 10) 00:09:10.239 38313.354 - 38515.003: 99.9801% ( 10) 00:09:10.239 38515.003 - 38716.652: 100.0000% ( 3) 00:09:10.239 00:09:10.239 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:10.239 ============================================================================== 00:09:10.239 Range in us Cumulative IO count 00:09:10.239 3503.655 - 3528.862: 0.0197% ( 3) 00:09:10.239 3528.862 - 3554.068: 0.0263% ( 1) 00:09:10.239 3554.068 - 3579.274: 0.0394% ( 2) 
00:09:10.239 3579.274 - 3604.480: 0.0460% ( 1) 00:09:10.239 3604.480 - 3629.686: 0.0591% ( 2) 00:09:10.239 3629.686 - 3654.892: 0.0722% ( 2) 00:09:10.239 3654.892 - 3680.098: 0.0985% ( 4) 00:09:10.239 3680.098 - 3705.305: 0.1116% ( 2) 00:09:10.239 3705.305 - 3730.511: 0.1247% ( 2) 00:09:10.239 3730.511 - 3755.717: 0.1313% ( 1) 00:09:10.239 3755.717 - 3780.923: 0.1444% ( 2) 00:09:10.239 3780.923 - 3806.129: 0.1576% ( 2) 00:09:10.239 3806.129 - 3831.335: 0.1707% ( 2) 00:09:10.239 3831.335 - 3856.542: 0.1838% ( 2) 00:09:10.239 3856.542 - 3881.748: 0.1970% ( 2) 00:09:10.239 3881.748 - 3906.954: 0.2166% ( 3) 00:09:10.239 3906.954 - 3932.160: 0.2298% ( 2) 00:09:10.239 3932.160 - 3957.366: 0.2429% ( 2) 00:09:10.239 3957.366 - 3982.572: 0.2560% ( 2) 00:09:10.239 3982.572 - 4007.778: 0.2692% ( 2) 00:09:10.239 4007.778 - 4032.985: 0.2889% ( 3) 00:09:10.239 4032.985 - 4058.191: 0.3020% ( 2) 00:09:10.239 4058.191 - 4083.397: 0.3151% ( 2) 00:09:10.239 4083.397 - 4108.603: 0.3283% ( 2) 00:09:10.239 4108.603 - 4133.809: 0.3414% ( 2) 00:09:10.239 4133.809 - 4159.015: 0.3611% ( 3) 00:09:10.239 4159.015 - 4184.222: 0.3742% ( 2) 00:09:10.239 4184.222 - 4209.428: 0.3873% ( 2) 00:09:10.239 4209.428 - 4234.634: 0.4005% ( 2) 00:09:10.239 4234.634 - 4259.840: 0.4136% ( 2) 00:09:10.239 4259.840 - 4285.046: 0.4267% ( 2) 00:09:10.239 4285.046 - 4310.252: 0.4399% ( 2) 00:09:10.239 4310.252 - 4335.458: 0.4530% ( 2) 00:09:10.239 4335.458 - 4360.665: 0.4661% ( 2) 00:09:10.239 4360.665 - 4385.871: 0.4793% ( 2) 00:09:10.239 4385.871 - 4411.077: 0.4924% ( 2) 00:09:10.239 4411.077 - 4436.283: 0.5055% ( 2) 00:09:10.239 4436.283 - 4461.489: 0.5186% ( 2) 00:09:10.239 4461.489 - 4486.695: 0.5318% ( 2) 00:09:10.239 4486.695 - 4511.902: 0.5449% ( 2) 00:09:10.239 4511.902 - 4537.108: 0.5646% ( 3) 00:09:10.239 4537.108 - 4562.314: 0.5777% ( 2) 00:09:10.239 4562.314 - 4587.520: 0.5909% ( 2) 00:09:10.239 4587.520 - 4612.726: 0.6040% ( 2) 00:09:10.239 4612.726 - 4637.932: 0.6171% ( 2) 00:09:10.239 4637.932 - 4663.138: 0.6303% ( 2) 00:09:10.239 4663.138 - 4688.345: 0.6434% ( 2) 00:09:10.239 4688.345 - 4713.551: 0.6565% ( 2) 00:09:10.239 4713.551 - 4738.757: 0.6696% ( 2) 00:09:10.239 4738.757 - 4763.963: 0.6828% ( 2) 00:09:10.239 4763.963 - 4789.169: 0.6959% ( 2) 00:09:10.239 4789.169 - 4814.375: 0.7090% ( 2) 00:09:10.239 4814.375 - 4839.582: 0.7222% ( 2) 00:09:10.239 4839.582 - 4864.788: 0.7353% ( 2) 00:09:10.239 4864.788 - 4889.994: 0.7484% ( 2) 00:09:10.239 4889.994 - 4915.200: 0.7616% ( 2) 00:09:10.239 4915.200 - 4940.406: 0.7747% ( 2) 00:09:10.239 4940.406 - 4965.612: 0.7878% ( 2) 00:09:10.239 4965.612 - 4990.818: 0.7944% ( 1) 00:09:10.239 4990.818 - 5016.025: 0.8075% ( 2) 00:09:10.239 5016.025 - 5041.231: 0.8141% ( 1) 00:09:10.239 5041.231 - 5066.437: 0.8338% ( 3) 00:09:10.239 5066.437 - 5091.643: 0.8403% ( 1) 00:09:10.239 5494.942 - 5520.148: 0.8469% ( 1) 00:09:10.239 5520.148 - 5545.354: 0.8994% ( 8) 00:09:10.239 5545.354 - 5570.560: 1.0176% ( 18) 00:09:10.239 5570.560 - 5595.766: 1.1949% ( 27) 00:09:10.239 5595.766 - 5620.972: 1.4903% ( 45) 00:09:10.239 5620.972 - 5646.178: 1.9039% ( 63) 00:09:10.239 5646.178 - 5671.385: 2.4554% ( 84) 00:09:10.239 5671.385 - 5696.591: 2.9937% ( 82) 00:09:10.239 5696.591 - 5721.797: 3.5583% ( 86) 00:09:10.239 5721.797 - 5747.003: 4.0769% ( 79) 00:09:10.239 5747.003 - 5772.209: 4.7269% ( 99) 00:09:10.239 5772.209 - 5797.415: 5.3900% ( 101) 00:09:10.239 5797.415 - 5822.622: 6.0727% ( 104) 00:09:10.239 5822.622 - 5847.828: 6.8277% ( 115) 00:09:10.239 5847.828 - 5873.034: 7.5630% ( 112) 00:09:10.239 
5873.034 - 5898.240: 8.3311% ( 117) 00:09:10.239 5898.240 - 5923.446: 9.0664% ( 112) 00:09:10.239 5923.446 - 5948.652: 9.8017% ( 112) 00:09:10.239 5948.652 - 5973.858: 10.6289% ( 126) 00:09:10.239 5973.858 - 5999.065: 11.5415% ( 139) 00:09:10.239 5999.065 - 6024.271: 12.3753% ( 127) 00:09:10.239 6024.271 - 6049.477: 13.2747% ( 137) 00:09:10.239 6049.477 - 6074.683: 14.1347% ( 131) 00:09:10.239 6074.683 - 6099.889: 15.0079% ( 133) 00:09:10.239 6099.889 - 6125.095: 15.9467% ( 143) 00:09:10.239 6125.095 - 6150.302: 16.8592% ( 139) 00:09:10.239 6150.302 - 6175.508: 17.7061% ( 129) 00:09:10.239 6175.508 - 6200.714: 18.6187% ( 139) 00:09:10.239 6200.714 - 6225.920: 19.5312% ( 139) 00:09:10.239 6225.920 - 6251.126: 20.4372% ( 138) 00:09:10.239 6251.126 - 6276.332: 21.3432% ( 138) 00:09:10.239 6276.332 - 6301.538: 22.2164% ( 133) 00:09:10.239 6301.538 - 6326.745: 23.0830% ( 132) 00:09:10.239 6326.745 - 6351.951: 24.0021% ( 140) 00:09:10.239 6351.951 - 6377.157: 24.9081% ( 138) 00:09:10.239 6377.157 - 6402.363: 25.7944% ( 135) 00:09:10.239 6402.363 - 6427.569: 26.6741% ( 134) 00:09:10.239 6427.569 - 6452.775: 27.6129% ( 143) 00:09:10.239 6452.775 - 6503.188: 29.3789% ( 269) 00:09:10.239 6503.188 - 6553.600: 31.2237% ( 281) 00:09:10.239 6553.600 - 6604.012: 33.0226% ( 274) 00:09:10.239 6604.012 - 6654.425: 34.9462% ( 293) 00:09:10.239 6654.425 - 6704.837: 36.7450% ( 274) 00:09:10.239 6704.837 - 6755.249: 38.6358% ( 288) 00:09:10.239 6755.249 - 6805.662: 40.4477% ( 276) 00:09:10.239 6805.662 - 6856.074: 42.3451% ( 289) 00:09:10.239 6856.074 - 6906.486: 44.1767% ( 279) 00:09:10.239 6906.486 - 6956.898: 46.0478% ( 285) 00:09:10.239 6956.898 - 7007.311: 47.8598% ( 276) 00:09:10.239 7007.311 - 7057.723: 49.7308% ( 285) 00:09:10.239 7057.723 - 7108.135: 51.5625% ( 279) 00:09:10.239 7108.135 - 7158.548: 53.4270% ( 284) 00:09:10.239 7158.548 - 7208.960: 55.2784% ( 282) 00:09:10.239 7208.960 - 7259.372: 57.1232% ( 281) 00:09:10.239 7259.372 - 7309.785: 58.9811% ( 283) 00:09:10.239 7309.785 - 7360.197: 60.8259% ( 281) 00:09:10.239 7360.197 - 7410.609: 62.5657% ( 265) 00:09:10.239 7410.609 - 7461.022: 64.0625% ( 228) 00:09:10.239 7461.022 - 7511.434: 65.2639% ( 183) 00:09:10.239 7511.434 - 7561.846: 66.1436% ( 134) 00:09:10.239 7561.846 - 7612.258: 66.8527% ( 108) 00:09:10.239 7612.258 - 7662.671: 67.4829% ( 96) 00:09:10.239 7662.671 - 7713.083: 67.9884% ( 77) 00:09:10.239 7713.083 - 7763.495: 68.3889% ( 61) 00:09:10.239 7763.495 - 7813.908: 68.7697% ( 58) 00:09:10.239 7813.908 - 7864.320: 69.0651% ( 45) 00:09:10.239 7864.320 - 7914.732: 69.2686% ( 31) 00:09:10.239 7914.732 - 7965.145: 69.4262% ( 24) 00:09:10.239 7965.145 - 8015.557: 69.5575% ( 20) 00:09:10.239 8015.557 - 8065.969: 69.6888% ( 20) 00:09:10.239 8065.969 - 8116.382: 69.8070% ( 18) 00:09:10.239 8116.382 - 8166.794: 69.9120% ( 16) 00:09:10.239 8166.794 - 8217.206: 70.0236% ( 17) 00:09:10.239 8217.206 - 8267.618: 70.1090% ( 13) 00:09:10.239 8267.618 - 8318.031: 70.2075% ( 15) 00:09:10.239 8318.031 - 8368.443: 70.2862% ( 12) 00:09:10.239 8368.443 - 8418.855: 70.3782% ( 14) 00:09:10.239 8418.855 - 8469.268: 70.4504% ( 11) 00:09:10.239 8469.268 - 8519.680: 70.5095% ( 9) 00:09:10.239 8519.680 - 8570.092: 70.5882% ( 12) 00:09:10.239 8570.092 - 8620.505: 70.6801% ( 14) 00:09:10.239 8620.505 - 8670.917: 70.7918% ( 17) 00:09:10.239 8670.917 - 8721.329: 70.9034% ( 17) 00:09:10.239 8721.329 - 8771.742: 70.9887% ( 13) 00:09:10.239 8771.742 - 8822.154: 71.0806% ( 14) 00:09:10.239 8822.154 - 8872.566: 71.1660% ( 13) 00:09:10.239 8872.566 - 8922.978: 71.2513% ( 
13) 00:09:10.239 8922.978 - 8973.391: 71.3367% ( 13) 00:09:10.239 8973.391 - 9023.803: 71.4220% ( 13) 00:09:10.239 9023.803 - 9074.215: 71.5270% ( 16) 00:09:10.239 9074.215 - 9124.628: 71.6190% ( 14) 00:09:10.239 9124.628 - 9175.040: 71.7240% ( 16) 00:09:10.239 9175.040 - 9225.452: 71.8356% ( 17) 00:09:10.239 9225.452 - 9275.865: 71.9407% ( 16) 00:09:10.239 9275.865 - 9326.277: 72.0457% ( 16) 00:09:10.239 9326.277 - 9376.689: 72.1507% ( 16) 00:09:10.239 9376.689 - 9427.102: 72.2689% ( 18) 00:09:10.239 9427.102 - 9477.514: 72.3871% ( 18) 00:09:10.239 9477.514 - 9527.926: 72.5118% ( 19) 00:09:10.239 9527.926 - 9578.338: 72.6694% ( 24) 00:09:10.239 9578.338 - 9628.751: 72.8335% ( 25) 00:09:10.240 9628.751 - 9679.163: 72.9714% ( 21) 00:09:10.240 9679.163 - 9729.575: 73.2012% ( 35) 00:09:10.240 9729.575 - 9779.988: 73.3915% ( 29) 00:09:10.240 9779.988 - 9830.400: 73.5885% ( 30) 00:09:10.240 9830.400 - 9880.812: 73.7986% ( 32) 00:09:10.240 9880.812 - 9931.225: 74.0087% ( 32) 00:09:10.240 9931.225 - 9981.637: 74.1925% ( 28) 00:09:10.240 9981.637 - 10032.049: 74.4617% ( 41) 00:09:10.240 10032.049 - 10082.462: 74.7046% ( 37) 00:09:10.240 10082.462 - 10132.874: 74.9409% ( 36) 00:09:10.240 10132.874 - 10183.286: 75.1904% ( 38) 00:09:10.240 10183.286 - 10233.698: 75.4464% ( 39) 00:09:10.240 10233.698 - 10284.111: 75.7025% ( 39) 00:09:10.240 10284.111 - 10334.523: 75.9585% ( 39) 00:09:10.240 10334.523 - 10384.935: 76.2211% ( 40) 00:09:10.240 10384.935 - 10435.348: 76.4575% ( 36) 00:09:10.240 10435.348 - 10485.760: 76.6872% ( 35) 00:09:10.240 10485.760 - 10536.172: 76.9105% ( 34) 00:09:10.240 10536.172 - 10586.585: 77.1468% ( 36) 00:09:10.240 10586.585 - 10636.997: 77.3700% ( 34) 00:09:10.240 10636.997 - 10687.409: 77.6064% ( 36) 00:09:10.240 10687.409 - 10737.822: 77.8493% ( 37) 00:09:10.240 10737.822 - 10788.234: 78.0659% ( 33) 00:09:10.240 10788.234 - 10838.646: 78.2957% ( 35) 00:09:10.240 10838.646 - 10889.058: 78.5386% ( 37) 00:09:10.240 10889.058 - 10939.471: 78.8143% ( 42) 00:09:10.240 10939.471 - 10989.883: 79.0901% ( 42) 00:09:10.240 10989.883 - 11040.295: 79.3527% ( 40) 00:09:10.240 11040.295 - 11090.708: 79.6415% ( 44) 00:09:10.240 11090.708 - 11141.120: 79.9370% ( 45) 00:09:10.240 11141.120 - 11191.532: 80.2127% ( 42) 00:09:10.240 11191.532 - 11241.945: 80.5016% ( 44) 00:09:10.240 11241.945 - 11292.357: 80.7576% ( 39) 00:09:10.240 11292.357 - 11342.769: 81.0202% ( 40) 00:09:10.240 11342.769 - 11393.182: 81.2828% ( 40) 00:09:10.240 11393.182 - 11443.594: 81.5717% ( 44) 00:09:10.240 11443.594 - 11494.006: 81.8606% ( 44) 00:09:10.240 11494.006 - 11544.418: 82.0969% ( 36) 00:09:10.240 11544.418 - 11594.831: 82.3595% ( 40) 00:09:10.240 11594.831 - 11645.243: 82.6024% ( 37) 00:09:10.240 11645.243 - 11695.655: 82.8388% ( 36) 00:09:10.240 11695.655 - 11746.068: 83.0226% ( 28) 00:09:10.240 11746.068 - 11796.480: 83.1998% ( 27) 00:09:10.240 11796.480 - 11846.892: 83.3574% ( 24) 00:09:10.240 11846.892 - 11897.305: 83.5281% ( 26) 00:09:10.240 11897.305 - 11947.717: 83.7185% ( 29) 00:09:10.240 11947.717 - 11998.129: 83.8892% ( 26) 00:09:10.240 11998.129 - 12048.542: 84.0664% ( 27) 00:09:10.240 12048.542 - 12098.954: 84.2174% ( 23) 00:09:10.240 12098.954 - 12149.366: 84.3619% ( 22) 00:09:10.240 12149.366 - 12199.778: 84.5260% ( 25) 00:09:10.240 12199.778 - 12250.191: 84.6770% ( 23) 00:09:10.240 12250.191 - 12300.603: 84.8149% ( 21) 00:09:10.240 12300.603 - 12351.015: 84.9724% ( 24) 00:09:10.240 12351.015 - 12401.428: 85.1300% ( 24) 00:09:10.240 12401.428 - 12451.840: 85.3269% ( 30) 00:09:10.240 12451.840 
- 12502.252: 85.5370% ( 32) 00:09:10.240 12502.252 - 12552.665: 85.7996% ( 40) 00:09:10.240 12552.665 - 12603.077: 85.9900% ( 29) 00:09:10.240 12603.077 - 12653.489: 86.2001% ( 32) 00:09:10.240 12653.489 - 12703.902: 86.4364% ( 36) 00:09:10.240 12703.902 - 12754.314: 86.6728% ( 36) 00:09:10.240 12754.314 - 12804.726: 86.8960% ( 34) 00:09:10.240 12804.726 - 12855.138: 87.1258% ( 35) 00:09:10.240 12855.138 - 12905.551: 87.3293% ( 31) 00:09:10.240 12905.551 - 13006.375: 87.7626% ( 66) 00:09:10.240 13006.375 - 13107.200: 88.2353% ( 72) 00:09:10.240 13107.200 - 13208.025: 88.7342% ( 76) 00:09:10.240 13208.025 - 13308.849: 89.2791% ( 83) 00:09:10.240 13308.849 - 13409.674: 89.7518% ( 72) 00:09:10.240 13409.674 - 13510.498: 90.2245% ( 72) 00:09:10.240 13510.498 - 13611.323: 90.7300% ( 77) 00:09:10.240 13611.323 - 13712.148: 91.2159% ( 74) 00:09:10.240 13712.148 - 13812.972: 91.6886% ( 72) 00:09:10.240 13812.972 - 13913.797: 92.1547% ( 71) 00:09:10.240 13913.797 - 14014.622: 92.6208% ( 71) 00:09:10.240 14014.622 - 14115.446: 93.0804% ( 70) 00:09:10.240 14115.446 - 14216.271: 93.5596% ( 73) 00:09:10.240 14216.271 - 14317.095: 93.9798% ( 64) 00:09:10.240 14317.095 - 14417.920: 94.3999% ( 64) 00:09:10.240 14417.920 - 14518.745: 94.8529% ( 69) 00:09:10.240 14518.745 - 14619.569: 95.3059% ( 69) 00:09:10.240 14619.569 - 14720.394: 95.7130% ( 62) 00:09:10.240 14720.394 - 14821.218: 96.1397% ( 65) 00:09:10.240 14821.218 - 14922.043: 96.4483% ( 47) 00:09:10.240 14922.043 - 15022.868: 96.7174% ( 41) 00:09:10.240 15022.868 - 15123.692: 96.9669% ( 38) 00:09:10.240 15123.692 - 15224.517: 97.2295% ( 40) 00:09:10.240 15224.517 - 15325.342: 97.4921% ( 40) 00:09:10.240 15325.342 - 15426.166: 97.7219% ( 35) 00:09:10.240 15426.166 - 15526.991: 97.9582% ( 36) 00:09:10.240 15526.991 - 15627.815: 98.1683% ( 32) 00:09:10.240 15627.815 - 15728.640: 98.3259% ( 24) 00:09:10.240 15728.640 - 15829.465: 98.4572% ( 20) 00:09:10.240 15829.465 - 15930.289: 98.5622% ( 16) 00:09:10.240 15930.289 - 16031.114: 98.6804% ( 18) 00:09:10.240 16031.114 - 16131.938: 98.7789% ( 15) 00:09:10.240 16131.938 - 16232.763: 98.8708% ( 14) 00:09:10.240 16232.763 - 16333.588: 98.9430% ( 11) 00:09:10.240 16333.588 - 16434.412: 99.0087% ( 10) 00:09:10.240 16434.412 - 16535.237: 99.0481% ( 6) 00:09:10.240 16535.237 - 16636.062: 99.0809% ( 5) 00:09:10.240 16636.062 - 16736.886: 99.1203% ( 6) 00:09:10.240 16736.886 - 16837.711: 99.1531% ( 5) 00:09:10.240 16837.711 - 16938.535: 99.1597% ( 1) 00:09:10.240 22181.415 - 22282.240: 99.1662% ( 1) 00:09:10.240 22282.240 - 22383.065: 99.1859% ( 3) 00:09:10.240 22383.065 - 22483.889: 99.2188% ( 5) 00:09:10.240 22483.889 - 22584.714: 99.2516% ( 5) 00:09:10.240 22584.714 - 22685.538: 99.2844% ( 5) 00:09:10.240 22685.538 - 22786.363: 99.3172% ( 5) 00:09:10.240 22786.363 - 22887.188: 99.3501% ( 5) 00:09:10.240 22887.188 - 22988.012: 99.3894% ( 6) 00:09:10.240 22988.012 - 23088.837: 99.4223% ( 5) 00:09:10.240 23088.837 - 23189.662: 99.4551% ( 5) 00:09:10.240 23189.662 - 23290.486: 99.4879% ( 5) 00:09:10.240 23290.486 - 23391.311: 99.5207% ( 5) 00:09:10.240 23391.311 - 23492.135: 99.5536% ( 5) 00:09:10.240 23492.135 - 23592.960: 99.5798% ( 4) 00:09:10.240 23592.960 - 23693.785: 99.6127% ( 5) 00:09:10.240 23693.785 - 23794.609: 99.6455% ( 5) 00:09:10.240 23794.609 - 23895.434: 99.6783% ( 5) 00:09:10.240 23895.434 - 23996.258: 99.7111% ( 5) 00:09:10.240 23996.258 - 24097.083: 99.7440% ( 5) 00:09:10.240 24097.083 - 24197.908: 99.7702% ( 4) 00:09:10.240 24197.908 - 24298.732: 99.8030% ( 5) 00:09:10.240 24298.732 - 
24399.557: 99.8359% ( 5) 00:09:10.240 24399.557 - 24500.382: 99.8687% ( 5) 00:09:10.240 24500.382 - 24601.206: 99.9015% ( 5) 00:09:10.240 24601.206 - 24702.031: 99.9343% ( 5) 00:09:10.240 24702.031 - 24802.855: 99.9672% ( 5) 00:09:10.240 24802.855 - 24903.680: 100.0000% ( 5) 00:09:10.240 00:09:10.240 06:34:20 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:11.189 Initializing NVMe Controllers 00:09:11.189 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:11.189 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:11.189 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:11.189 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:11.189 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:11.189 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:11.189 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:11.189 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:11.189 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:11.189 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:11.189 Initialization complete. Launching workers. 00:09:11.189 ======================================================== 00:09:11.189 Latency(us) 00:09:11.189 Device Information : IOPS MiB/s Average min max 00:09:11.189 PCIE (0000:00:07.0) NSID 1 from core 0: 13885.25 162.72 9215.30 5606.37 20797.37 00:09:11.189 PCIE (0000:00:09.0) NSID 1 from core 0: 13885.25 162.72 9215.45 5809.04 21813.03 00:09:11.189 PCIE (0000:00:06.0) NSID 1 from core 0: 13885.25 162.72 9208.81 5188.56 23134.76 00:09:11.189 PCIE (0000:00:08.0) NSID 1 from core 0: 13885.25 162.72 9202.17 5936.40 23156.05 00:09:11.189 PCIE (0000:00:08.0) NSID 2 from core 0: 13885.25 162.72 9196.26 4881.36 23935.26 00:09:11.189 PCIE (0000:00:08.0) NSID 3 from core 0: 13885.25 162.72 9190.18 4392.52 23834.53 00:09:11.189 ======================================================== 00:09:11.189 Total : 83311.52 976.31 9204.70 4392.52 23935.26 00:09:11.189 00:09:11.189 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 6654.425us 00:09:11.189 10.00000% : 8116.382us 00:09:11.189 25.00000% : 8570.092us 00:09:11.189 50.00000% : 9074.215us 00:09:11.189 75.00000% : 9578.338us 00:09:11.189 90.00000% : 10183.286us 00:09:11.189 95.00000% : 10939.471us 00:09:11.189 98.00000% : 13510.498us 00:09:11.189 99.00000% : 14518.745us 00:09:11.189 99.50000% : 19358.326us 00:09:11.189 99.90000% : 20366.572us 00:09:11.189 99.99000% : 20769.871us 00:09:11.189 99.99900% : 20870.695us 00:09:11.189 99.99990% : 20870.695us 00:09:11.189 99.99999% : 20870.695us 00:09:11.189 00:09:11.189 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 6553.600us 00:09:11.189 10.00000% : 8166.794us 00:09:11.189 25.00000% : 8570.092us 00:09:11.189 50.00000% : 9023.803us 00:09:11.189 75.00000% : 9578.338us 00:09:11.189 90.00000% : 10183.286us 00:09:11.189 95.00000% : 10838.646us 00:09:11.189 98.00000% : 13510.498us 00:09:11.189 99.00000% : 14619.569us 00:09:11.189 99.50000% : 20064.098us 00:09:11.189 99.90000% : 21576.468us 00:09:11.189 99.99000% : 21878.942us 00:09:11.189 99.99900% : 21878.942us 00:09:11.189 99.99990% : 21878.942us 00:09:11.189 99.99999% : 21878.942us 00:09:11.189 00:09:11.189 Summary 
latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 6503.188us 00:09:11.189 10.00000% : 7965.145us 00:09:11.189 25.00000% : 8418.855us 00:09:11.189 50.00000% : 9074.215us 00:09:11.189 75.00000% : 9729.575us 00:09:11.189 90.00000% : 10334.523us 00:09:11.189 95.00000% : 11241.945us 00:09:11.189 98.00000% : 13006.375us 00:09:11.189 99.00000% : 14619.569us 00:09:11.189 99.50000% : 21072.345us 00:09:11.189 99.90000% : 22786.363us 00:09:11.189 99.99000% : 23088.837us 00:09:11.189 99.99900% : 23189.662us 00:09:11.189 99.99990% : 23189.662us 00:09:11.189 99.99999% : 23189.662us 00:09:11.189 00:09:11.189 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 6251.126us 00:09:11.189 10.00000% : 8116.382us 00:09:11.189 25.00000% : 8519.680us 00:09:11.189 50.00000% : 9023.803us 00:09:11.189 75.00000% : 9578.338us 00:09:11.189 90.00000% : 10183.286us 00:09:11.189 95.00000% : 11040.295us 00:09:11.189 98.00000% : 13409.674us 00:09:11.189 99.00000% : 14518.745us 00:09:11.189 99.50000% : 22181.415us 00:09:11.189 99.90000% : 22786.363us 00:09:11.189 99.99000% : 23189.662us 00:09:11.189 99.99900% : 23189.662us 00:09:11.189 99.99990% : 23189.662us 00:09:11.189 99.99999% : 23189.662us 00:09:11.189 00:09:11.189 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 5948.652us 00:09:11.189 10.00000% : 8065.969us 00:09:11.189 25.00000% : 8519.680us 00:09:11.189 50.00000% : 9023.803us 00:09:11.189 75.00000% : 9578.338us 00:09:11.189 90.00000% : 10183.286us 00:09:11.189 95.00000% : 11090.708us 00:09:11.189 98.00000% : 13611.323us 00:09:11.189 99.00000% : 15325.342us 00:09:11.189 99.50000% : 23088.837us 00:09:11.189 99.90000% : 23794.609us 00:09:11.189 99.99000% : 23996.258us 00:09:11.189 99.99900% : 23996.258us 00:09:11.189 99.99990% : 23996.258us 00:09:11.189 99.99999% : 23996.258us 00:09:11.189 00:09:11.189 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:11.189 ================================================================================= 00:09:11.189 1.00000% : 6099.889us 00:09:11.189 10.00000% : 8065.969us 00:09:11.189 25.00000% : 8519.680us 00:09:11.189 50.00000% : 9023.803us 00:09:11.189 75.00000% : 9578.338us 00:09:11.189 90.00000% : 10233.698us 00:09:11.189 95.00000% : 10889.058us 00:09:11.189 98.00000% : 13510.498us 00:09:11.189 99.00000% : 15123.692us 00:09:11.189 99.50000% : 22685.538us 00:09:11.189 99.90000% : 23693.785us 00:09:11.189 99.99000% : 23794.609us 00:09:11.189 99.99900% : 23895.434us 00:09:11.189 99.99990% : 23895.434us 00:09:11.189 99.99999% : 23895.434us 00:09:11.189 00:09:11.189 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:11.189 ============================================================================== 00:09:11.189 Range in us Cumulative IO count 00:09:11.189 5595.766 - 5620.972: 0.0072% ( 1) 00:09:11.189 5822.622 - 5847.828: 0.0215% ( 2) 00:09:11.189 5847.828 - 5873.034: 0.0358% ( 2) 00:09:11.189 5873.034 - 5898.240: 0.0573% ( 3) 00:09:11.189 5898.240 - 5923.446: 0.0717% ( 2) 00:09:11.189 5923.446 - 5948.652: 0.0932% ( 3) 00:09:11.189 5948.652 - 5973.858: 0.1075% ( 2) 00:09:11.189 5973.858 - 5999.065: 0.1362% ( 4) 00:09:11.189 5999.065 - 6024.271: 0.1577% ( 3) 
00:09:11.189 6024.271 - 6049.477: 0.1792% ( 3) 00:09:11.189 6049.477 - 6074.683: 0.2007% ( 3) 00:09:11.189 6074.683 - 6099.889: 0.2222% ( 3) 00:09:11.189 6099.889 - 6125.095: 0.2437% ( 3) 00:09:11.189 6125.095 - 6150.302: 0.2652% ( 3) 00:09:11.189 6150.302 - 6175.508: 0.2795% ( 2) 00:09:11.189 6175.508 - 6200.714: 0.3010% ( 3) 00:09:11.189 6200.714 - 6225.920: 0.3584% ( 8) 00:09:11.189 6225.920 - 6251.126: 0.4014% ( 6) 00:09:11.189 6251.126 - 6276.332: 0.5949% ( 27) 00:09:11.189 6276.332 - 6301.538: 0.6307% ( 5) 00:09:11.189 6301.538 - 6326.745: 0.6737% ( 6) 00:09:11.189 6326.745 - 6351.951: 0.7024% ( 4) 00:09:11.189 6351.951 - 6377.157: 0.7167% ( 2) 00:09:11.189 6377.157 - 6402.363: 0.7311% ( 2) 00:09:11.189 6402.363 - 6427.569: 0.7454% ( 2) 00:09:11.189 6427.569 - 6452.775: 0.7669% ( 3) 00:09:11.189 6452.775 - 6503.188: 0.8314% ( 9) 00:09:11.189 6503.188 - 6553.600: 0.8888% ( 8) 00:09:11.189 6553.600 - 6604.012: 0.9461% ( 8) 00:09:11.189 6604.012 - 6654.425: 1.0106% ( 9) 00:09:11.189 6654.425 - 6704.837: 1.0679% ( 8) 00:09:11.189 6704.837 - 6755.249: 1.1325% ( 9) 00:09:11.189 6755.249 - 6805.662: 1.1898% ( 8) 00:09:11.189 6805.662 - 6856.074: 1.2543% ( 9) 00:09:11.189 6856.074 - 6906.486: 1.3116% ( 8) 00:09:11.189 6906.486 - 6956.898: 1.3546% ( 6) 00:09:11.189 6956.898 - 7007.311: 1.4335% ( 11) 00:09:11.189 7007.311 - 7057.723: 1.5410% ( 15) 00:09:11.189 7057.723 - 7108.135: 1.6342% ( 13) 00:09:11.189 7108.135 - 7158.548: 1.7345% ( 14) 00:09:11.189 7158.548 - 7208.960: 1.8349% ( 14) 00:09:11.189 7208.960 - 7259.372: 1.9495% ( 16) 00:09:11.189 7259.372 - 7309.785: 2.0929% ( 20) 00:09:11.189 7309.785 - 7360.197: 2.2721% ( 25) 00:09:11.189 7360.197 - 7410.609: 2.4871% ( 30) 00:09:11.189 7410.609 - 7461.022: 2.7451% ( 36) 00:09:11.190 7461.022 - 7511.434: 2.9960% ( 35) 00:09:11.190 7511.434 - 7561.846: 3.3042% ( 43) 00:09:11.190 7561.846 - 7612.258: 3.5981% ( 41) 00:09:11.190 7612.258 - 7662.671: 3.9134% ( 44) 00:09:11.190 7662.671 - 7713.083: 4.2718% ( 50) 00:09:11.190 7713.083 - 7763.495: 4.7305% ( 64) 00:09:11.190 7763.495 - 7813.908: 5.2537% ( 73) 00:09:11.190 7813.908 - 7864.320: 6.1783% ( 129) 00:09:11.190 7864.320 - 7914.732: 6.9811% ( 112) 00:09:11.190 7914.732 - 7965.145: 7.8770% ( 125) 00:09:11.190 7965.145 - 8015.557: 8.8016% ( 129) 00:09:11.190 8015.557 - 8065.969: 9.8552% ( 147) 00:09:11.190 8065.969 - 8116.382: 10.8587% ( 140) 00:09:11.190 8116.382 - 8166.794: 11.8334% ( 136) 00:09:11.190 8166.794 - 8217.206: 12.9874% ( 161) 00:09:11.190 8217.206 - 8267.618: 14.3134% ( 185) 00:09:11.190 8267.618 - 8318.031: 15.8329% ( 212) 00:09:11.190 8318.031 - 8368.443: 17.4670% ( 228) 00:09:11.190 8368.443 - 8418.855: 19.3306% ( 260) 00:09:11.190 8418.855 - 8469.268: 21.5166% ( 305) 00:09:11.190 8469.268 - 8519.680: 23.7242% ( 308) 00:09:11.190 8519.680 - 8570.092: 26.1325% ( 336) 00:09:11.190 8570.092 - 8620.505: 28.5335% ( 335) 00:09:11.190 8620.505 - 8670.917: 30.9776% ( 341) 00:09:11.190 8670.917 - 8721.329: 33.3859% ( 336) 00:09:11.190 8721.329 - 8771.742: 35.9232% ( 354) 00:09:11.190 8771.742 - 8822.154: 38.5536% ( 367) 00:09:11.190 8822.154 - 8872.566: 41.5783% ( 422) 00:09:11.190 8872.566 - 8922.978: 44.2374% ( 371) 00:09:11.190 8922.978 - 8973.391: 46.9968% ( 385) 00:09:11.190 8973.391 - 9023.803: 49.9427% ( 411) 00:09:11.190 9023.803 - 9074.215: 52.8025% ( 399) 00:09:11.190 9074.215 - 9124.628: 55.4688% ( 372) 00:09:11.190 9124.628 - 9175.040: 57.9702% ( 349) 00:09:11.190 9175.040 - 9225.452: 60.3856% ( 337) 00:09:11.190 9225.452 - 9275.865: 62.5430% ( 301) 00:09:11.190 9275.865 - 
9326.277: 64.8581% ( 323) 00:09:11.190 9326.277 - 9376.689: 67.1732% ( 323) 00:09:11.190 9376.689 - 9427.102: 69.5599% ( 333) 00:09:11.190 9427.102 - 9477.514: 71.6456% ( 291) 00:09:11.190 9477.514 - 9527.926: 73.4375% ( 250) 00:09:11.190 9527.926 - 9578.338: 75.2007% ( 246) 00:09:11.190 9578.338 - 9628.751: 76.9209% ( 240) 00:09:11.190 9628.751 - 9679.163: 78.4905% ( 219) 00:09:11.190 9679.163 - 9729.575: 80.1964% ( 238) 00:09:11.190 9729.575 - 9779.988: 81.6299% ( 200) 00:09:11.190 9779.988 - 9830.400: 82.9989% ( 191) 00:09:11.190 9830.400 - 9880.812: 84.3320% ( 186) 00:09:11.190 9880.812 - 9931.225: 85.4860% ( 161) 00:09:11.190 9931.225 - 9981.637: 86.6399% ( 161) 00:09:11.190 9981.637 - 10032.049: 87.8010% ( 162) 00:09:11.190 10032.049 - 10082.462: 88.5823% ( 109) 00:09:11.190 10082.462 - 10132.874: 89.3277% ( 104) 00:09:11.190 10132.874 - 10183.286: 90.0229% ( 97) 00:09:11.190 10183.286 - 10233.698: 90.6608% ( 89) 00:09:11.190 10233.698 - 10284.111: 91.2916% ( 88) 00:09:11.190 10284.111 - 10334.523: 91.8220% ( 74) 00:09:11.190 10334.523 - 10384.935: 92.2663% ( 62) 00:09:11.190 10384.935 - 10435.348: 92.7251% ( 64) 00:09:11.190 10435.348 - 10485.760: 93.1264% ( 56) 00:09:11.190 10485.760 - 10536.172: 93.4848% ( 50) 00:09:11.190 10536.172 - 10586.585: 93.7858% ( 42) 00:09:11.190 10586.585 - 10636.997: 94.0725% ( 40) 00:09:11.190 10636.997 - 10687.409: 94.2876% ( 30) 00:09:11.190 10687.409 - 10737.822: 94.4667% ( 25) 00:09:11.190 10737.822 - 10788.234: 94.6388% ( 24) 00:09:11.190 10788.234 - 10838.646: 94.8179% ( 25) 00:09:11.190 10838.646 - 10889.058: 94.9828% ( 23) 00:09:11.190 10889.058 - 10939.471: 95.1692% ( 26) 00:09:11.190 10939.471 - 10989.883: 95.2767% ( 15) 00:09:11.190 10989.883 - 11040.295: 95.3627% ( 12) 00:09:11.190 11040.295 - 11090.708: 95.4558% ( 13) 00:09:11.190 11090.708 - 11141.120: 95.5204% ( 9) 00:09:11.190 11141.120 - 11191.532: 95.5849% ( 9) 00:09:11.190 11191.532 - 11241.945: 95.6350% ( 7) 00:09:11.190 11241.945 - 11292.357: 95.6709% ( 5) 00:09:11.190 11292.357 - 11342.769: 95.7139% ( 6) 00:09:11.190 11342.769 - 11393.182: 95.7784% ( 9) 00:09:11.190 11393.182 - 11443.594: 95.8859% ( 15) 00:09:11.190 11443.594 - 11494.006: 95.9576% ( 10) 00:09:11.190 11494.006 - 11544.418: 96.0221% ( 9) 00:09:11.190 11544.418 - 11594.831: 96.0722% ( 7) 00:09:11.190 11594.831 - 11645.243: 96.1296% ( 8) 00:09:11.190 11645.243 - 11695.655: 96.1726% ( 6) 00:09:11.190 11695.655 - 11746.068: 96.2013% ( 4) 00:09:11.190 11746.068 - 11796.480: 96.2228% ( 3) 00:09:11.190 11796.480 - 11846.892: 96.2443% ( 3) 00:09:11.190 11846.892 - 11897.305: 96.2729% ( 4) 00:09:11.190 11897.305 - 11947.717: 96.3088% ( 5) 00:09:11.190 11947.717 - 11998.129: 96.3661% ( 8) 00:09:11.190 11998.129 - 12048.542: 96.4235% ( 8) 00:09:11.190 12048.542 - 12098.954: 96.4951% ( 10) 00:09:11.190 12098.954 - 12149.366: 96.5596% ( 9) 00:09:11.190 12149.366 - 12199.778: 96.6170% ( 8) 00:09:11.190 12199.778 - 12250.191: 96.6886% ( 10) 00:09:11.190 12250.191 - 12300.603: 96.7460% ( 8) 00:09:11.190 12300.603 - 12351.015: 96.8033% ( 8) 00:09:11.190 12351.015 - 12401.428: 96.8750% ( 10) 00:09:11.190 12401.428 - 12451.840: 96.9395% ( 9) 00:09:11.190 12451.840 - 12502.252: 96.9968% ( 8) 00:09:11.190 12502.252 - 12552.665: 97.0757% ( 11) 00:09:11.190 12552.665 - 12603.077: 97.1259% ( 7) 00:09:11.190 12603.077 - 12653.489: 97.1689% ( 6) 00:09:11.190 12653.489 - 12703.902: 97.2190% ( 7) 00:09:11.190 12703.902 - 12754.314: 97.2692% ( 7) 00:09:11.190 12754.314 - 12804.726: 97.3122% ( 6) 00:09:11.190 12804.726 - 12855.138: 97.3624% ( 
7) 00:09:11.190 12855.138 - 12905.551: 97.3982% ( 5) 00:09:11.190 12905.551 - 13006.375: 97.4699% ( 10) 00:09:11.190 13006.375 - 13107.200: 97.5774% ( 15) 00:09:11.190 13107.200 - 13208.025: 97.6921% ( 16) 00:09:11.190 13208.025 - 13308.849: 97.7924% ( 14) 00:09:11.190 13308.849 - 13409.674: 97.9573% ( 23) 00:09:11.190 13409.674 - 13510.498: 98.1580% ( 28) 00:09:11.190 13510.498 - 13611.323: 98.2942% ( 19) 00:09:11.190 13611.323 - 13712.148: 98.3802% ( 12) 00:09:11.190 13712.148 - 13812.972: 98.5450% ( 23) 00:09:11.190 13812.972 - 13913.797: 98.6167% ( 10) 00:09:11.190 13913.797 - 14014.622: 98.6812% ( 9) 00:09:11.190 14014.622 - 14115.446: 98.7529% ( 10) 00:09:11.190 14115.446 - 14216.271: 98.8174% ( 9) 00:09:11.190 14216.271 - 14317.095: 98.8962% ( 11) 00:09:11.190 14317.095 - 14417.920: 98.9679% ( 10) 00:09:11.190 14417.920 - 14518.745: 99.0467% ( 11) 00:09:11.190 14518.745 - 14619.569: 99.0826% ( 5) 00:09:11.190 17946.782 - 18047.606: 99.1041% ( 3) 00:09:11.190 18047.606 - 18148.431: 99.1256% ( 3) 00:09:11.190 18148.431 - 18249.255: 99.1614% ( 5) 00:09:11.190 18249.255 - 18350.080: 99.2044% ( 6) 00:09:11.190 18350.080 - 18450.905: 99.2331% ( 4) 00:09:11.190 18450.905 - 18551.729: 99.2618% ( 4) 00:09:11.190 18551.729 - 18652.554: 99.2904% ( 4) 00:09:11.190 18652.554 - 18753.378: 99.3263% ( 5) 00:09:11.190 18753.378 - 18854.203: 99.3621% ( 5) 00:09:11.190 18854.203 - 18955.028: 99.3836% ( 3) 00:09:11.190 18955.028 - 19055.852: 99.4266% ( 6) 00:09:11.190 19055.852 - 19156.677: 99.4553% ( 4) 00:09:11.190 19156.677 - 19257.502: 99.4983% ( 6) 00:09:11.190 19257.502 - 19358.326: 99.5269% ( 4) 00:09:11.190 19358.326 - 19459.151: 99.5628% ( 5) 00:09:11.190 19459.151 - 19559.975: 99.6058% ( 6) 00:09:11.190 19559.975 - 19660.800: 99.6918% ( 12) 00:09:11.190 19660.800 - 19761.625: 99.7205% ( 4) 00:09:11.190 19761.625 - 19862.449: 99.7420% ( 3) 00:09:11.190 19862.449 - 19963.274: 99.7850% ( 6) 00:09:11.190 19963.274 - 20064.098: 99.8208% ( 5) 00:09:11.190 20064.098 - 20164.923: 99.8495% ( 4) 00:09:11.190 20164.923 - 20265.748: 99.8782% ( 4) 00:09:11.190 20265.748 - 20366.572: 99.9068% ( 4) 00:09:11.190 20366.572 - 20467.397: 99.9283% ( 3) 00:09:11.190 20467.397 - 20568.222: 99.9498% ( 3) 00:09:11.190 20568.222 - 20669.046: 99.9713% ( 3) 00:09:11.190 20669.046 - 20769.871: 99.9928% ( 3) 00:09:11.190 20769.871 - 20870.695: 100.0000% ( 1) 00:09:11.190 00:09:11.190 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:11.190 ============================================================================== 00:09:11.190 Range in us Cumulative IO count 00:09:11.190 5797.415 - 5822.622: 0.0143% ( 2) 00:09:11.190 5822.622 - 5847.828: 0.0358% ( 3) 00:09:11.190 5847.828 - 5873.034: 0.0502% ( 2) 00:09:11.190 5873.034 - 5898.240: 0.0717% ( 3) 00:09:11.190 5898.240 - 5923.446: 0.0860% ( 2) 00:09:11.190 5923.446 - 5948.652: 0.1147% ( 4) 00:09:11.190 5948.652 - 5973.858: 0.1290% ( 2) 00:09:11.190 5973.858 - 5999.065: 0.1577% ( 4) 00:09:11.190 5999.065 - 6024.271: 0.1720% ( 2) 00:09:11.190 6024.271 - 6049.477: 0.1935% ( 3) 00:09:11.190 6049.477 - 6074.683: 0.2079% ( 2) 00:09:11.190 6074.683 - 6099.889: 0.2365% ( 4) 00:09:11.190 6099.889 - 6125.095: 0.2437% ( 1) 00:09:11.190 6125.095 - 6150.302: 0.2724% ( 4) 00:09:11.190 6150.302 - 6175.508: 0.2939% ( 3) 00:09:11.190 6175.508 - 6200.714: 0.3369% ( 6) 00:09:11.190 6200.714 - 6225.920: 0.3655% ( 4) 00:09:11.190 6225.920 - 6251.126: 0.4444% ( 11) 00:09:11.190 6251.126 - 6276.332: 0.6952% ( 35) 00:09:11.190 6276.332 - 6301.538: 0.7382% ( 6) 00:09:11.190 
6301.538 - 6326.745: 0.7669% ( 4) 00:09:11.190 6326.745 - 6351.951: 0.7884% ( 3) 00:09:11.190 6351.951 - 6377.157: 0.8171% ( 4) 00:09:11.190 6377.157 - 6402.363: 0.8386% ( 3) 00:09:11.190 6402.363 - 6427.569: 0.8744% ( 5) 00:09:11.190 6427.569 - 6452.775: 0.8959% ( 3) 00:09:11.190 6452.775 - 6503.188: 0.9533% ( 8) 00:09:11.190 6503.188 - 6553.600: 1.0178% ( 9) 00:09:11.190 6553.600 - 6604.012: 1.1038% ( 12) 00:09:11.190 6604.012 - 6654.425: 1.2041% ( 14) 00:09:11.190 6654.425 - 6704.837: 1.4550% ( 35) 00:09:11.191 6704.837 - 6755.249: 1.5195% ( 9) 00:09:11.191 6755.249 - 6805.662: 1.5697% ( 7) 00:09:11.191 6805.662 - 6856.074: 1.6198% ( 7) 00:09:11.191 6856.074 - 6906.486: 1.6987% ( 11) 00:09:11.191 6906.486 - 6956.898: 1.7704% ( 10) 00:09:11.191 6956.898 - 7007.311: 1.8564% ( 12) 00:09:11.191 7007.311 - 7057.723: 1.9280% ( 10) 00:09:11.191 7057.723 - 7108.135: 2.0140% ( 12) 00:09:11.191 7108.135 - 7158.548: 2.1144% ( 14) 00:09:11.191 7158.548 - 7208.960: 2.2076% ( 13) 00:09:11.191 7208.960 - 7259.372: 2.3079% ( 14) 00:09:11.191 7259.372 - 7309.785: 2.3939% ( 12) 00:09:11.191 7309.785 - 7360.197: 2.4943% ( 14) 00:09:11.191 7360.197 - 7410.609: 2.6448% ( 21) 00:09:11.191 7410.609 - 7461.022: 2.9458% ( 42) 00:09:11.191 7461.022 - 7511.434: 3.1178% ( 24) 00:09:11.191 7511.434 - 7561.846: 3.3472% ( 32) 00:09:11.191 7561.846 - 7612.258: 3.5550% ( 29) 00:09:11.191 7612.258 - 7662.671: 3.8274% ( 38) 00:09:11.191 7662.671 - 7713.083: 4.3076% ( 67) 00:09:11.191 7713.083 - 7763.495: 4.5513% ( 34) 00:09:11.191 7763.495 - 7813.908: 4.8452% ( 41) 00:09:11.191 7813.908 - 7864.320: 5.1964% ( 49) 00:09:11.191 7864.320 - 7914.732: 5.6336% ( 61) 00:09:11.191 7914.732 - 7965.145: 6.1640% ( 74) 00:09:11.191 7965.145 - 8015.557: 6.9381% ( 108) 00:09:11.191 8015.557 - 8065.969: 8.0849% ( 160) 00:09:11.191 8065.969 - 8116.382: 9.3248% ( 173) 00:09:11.191 8116.382 - 8166.794: 10.6365% ( 183) 00:09:11.191 8166.794 - 8217.206: 12.3853% ( 244) 00:09:11.191 8217.206 - 8267.618: 13.9263% ( 215) 00:09:11.191 8267.618 - 8318.031: 15.6035% ( 234) 00:09:11.191 8318.031 - 8368.443: 17.5674% ( 274) 00:09:11.191 8368.443 - 8418.855: 19.7319% ( 302) 00:09:11.191 8418.855 - 8469.268: 22.0399% ( 322) 00:09:11.191 8469.268 - 8519.680: 24.2761% ( 312) 00:09:11.191 8519.680 - 8570.092: 27.0069% ( 381) 00:09:11.191 8570.092 - 8620.505: 29.9097% ( 405) 00:09:11.191 8620.505 - 8670.917: 32.5258% ( 365) 00:09:11.191 8670.917 - 8721.329: 35.0774% ( 356) 00:09:11.191 8721.329 - 8771.742: 37.4785% ( 335) 00:09:11.191 8771.742 - 8822.154: 39.9298% ( 342) 00:09:11.191 8822.154 - 8872.566: 42.6247% ( 376) 00:09:11.191 8872.566 - 8922.978: 45.1978% ( 359) 00:09:11.191 8922.978 - 8973.391: 47.7638% ( 358) 00:09:11.191 8973.391 - 9023.803: 50.1792% ( 337) 00:09:11.191 9023.803 - 9074.215: 52.6519% ( 345) 00:09:11.191 9074.215 - 9124.628: 55.0387% ( 333) 00:09:11.191 9124.628 - 9175.040: 57.5330% ( 348) 00:09:11.191 9175.040 - 9225.452: 60.1204% ( 361) 00:09:11.191 9225.452 - 9275.865: 62.3136% ( 306) 00:09:11.191 9275.865 - 9326.277: 64.4495% ( 298) 00:09:11.191 9326.277 - 9376.689: 66.6213% ( 303) 00:09:11.191 9376.689 - 9427.102: 68.9579% ( 326) 00:09:11.191 9427.102 - 9477.514: 71.1224% ( 302) 00:09:11.191 9477.514 - 9527.926: 73.2368% ( 295) 00:09:11.191 9527.926 - 9578.338: 75.1075% ( 261) 00:09:11.191 9578.338 - 9628.751: 77.0140% ( 266) 00:09:11.191 9628.751 - 9679.163: 78.6912% ( 234) 00:09:11.191 9679.163 - 9729.575: 80.2824% ( 222) 00:09:11.191 9729.575 - 9779.988: 81.7087% ( 199) 00:09:11.191 9779.988 - 9830.400: 83.1709% ( 204) 
00:09:11.191 9830.400 - 9880.812: 84.4252% ( 175) 00:09:11.191 9880.812 - 9931.225: 85.6293% ( 168) 00:09:11.191 9931.225 - 9981.637: 86.8048% ( 164) 00:09:11.191 9981.637 - 10032.049: 87.9229% ( 156) 00:09:11.191 10032.049 - 10082.462: 88.8690% ( 132) 00:09:11.191 10082.462 - 10132.874: 89.7076% ( 117) 00:09:11.191 10132.874 - 10183.286: 90.4458% ( 103) 00:09:11.191 10183.286 - 10233.698: 91.1339% ( 96) 00:09:11.191 10233.698 - 10284.111: 91.7216% ( 82) 00:09:11.191 10284.111 - 10334.523: 92.2448% ( 73) 00:09:11.191 10334.523 - 10384.935: 92.5745% ( 46) 00:09:11.191 10384.935 - 10435.348: 92.9186% ( 48) 00:09:11.191 10435.348 - 10485.760: 93.2698% ( 49) 00:09:11.191 10485.760 - 10536.172: 93.5636% ( 41) 00:09:11.191 10536.172 - 10586.585: 93.8647% ( 42) 00:09:11.191 10586.585 - 10636.997: 94.1227% ( 36) 00:09:11.191 10636.997 - 10687.409: 94.4237% ( 42) 00:09:11.191 10687.409 - 10737.822: 94.7606% ( 47) 00:09:11.191 10737.822 - 10788.234: 94.9900% ( 32) 00:09:11.191 10788.234 - 10838.646: 95.1692% ( 25) 00:09:11.191 10838.646 - 10889.058: 95.2982% ( 18) 00:09:11.191 10889.058 - 10939.471: 95.4128% ( 16) 00:09:11.191 10939.471 - 10989.883: 95.5275% ( 16) 00:09:11.191 10989.883 - 11040.295: 95.6350% ( 15) 00:09:11.191 11040.295 - 11090.708: 95.7425% ( 15) 00:09:11.191 11090.708 - 11141.120: 95.8501% ( 15) 00:09:11.191 11141.120 - 11191.532: 95.9432% ( 13) 00:09:11.191 11191.532 - 11241.945: 96.0221% ( 11) 00:09:11.191 11241.945 - 11292.357: 96.0866% ( 9) 00:09:11.191 11292.357 - 11342.769: 96.1296% ( 6) 00:09:11.191 11342.769 - 11393.182: 96.1869% ( 8) 00:09:11.191 11393.182 - 11443.594: 96.2299% ( 6) 00:09:11.191 11443.594 - 11494.006: 96.2658% ( 5) 00:09:11.191 11494.006 - 11544.418: 96.3088% ( 6) 00:09:11.191 11544.418 - 11594.831: 96.3518% ( 6) 00:09:11.191 11594.831 - 11645.243: 96.3948% ( 6) 00:09:11.191 11645.243 - 11695.655: 96.4306% ( 5) 00:09:11.191 11695.655 - 11746.068: 96.4736% ( 6) 00:09:11.191 11746.068 - 11796.480: 96.5166% ( 6) 00:09:11.191 11796.480 - 11846.892: 96.5596% ( 6) 00:09:11.191 11846.892 - 11897.305: 96.6026% ( 6) 00:09:11.191 11897.305 - 11947.717: 96.6385% ( 5) 00:09:11.191 11947.717 - 11998.129: 96.6886% ( 7) 00:09:11.191 11998.129 - 12048.542: 96.7317% ( 6) 00:09:11.191 12048.542 - 12098.954: 96.7747% ( 6) 00:09:11.191 12098.954 - 12149.366: 96.8033% ( 4) 00:09:11.191 12149.366 - 12199.778: 96.8392% ( 5) 00:09:11.191 12199.778 - 12250.191: 96.8607% ( 3) 00:09:11.191 12250.191 - 12300.603: 96.8965% ( 5) 00:09:11.191 12300.603 - 12351.015: 96.9252% ( 4) 00:09:11.191 12351.015 - 12401.428: 96.9467% ( 3) 00:09:11.191 12401.428 - 12451.840: 96.9753% ( 4) 00:09:11.191 12451.840 - 12502.252: 96.9897% ( 2) 00:09:11.191 12502.252 - 12552.665: 97.0040% ( 2) 00:09:11.191 12552.665 - 12603.077: 97.0183% ( 2) 00:09:11.191 12603.077 - 12653.489: 97.0327% ( 2) 00:09:11.191 12653.489 - 12703.902: 97.0542% ( 3) 00:09:11.191 12703.902 - 12754.314: 97.0685% ( 2) 00:09:11.191 12754.314 - 12804.726: 97.0829% ( 2) 00:09:11.191 12804.726 - 12855.138: 97.0972% ( 2) 00:09:11.191 12855.138 - 12905.551: 97.1402% ( 6) 00:09:11.191 12905.551 - 13006.375: 97.2405% ( 14) 00:09:11.191 13006.375 - 13107.200: 97.3337% ( 13) 00:09:11.191 13107.200 - 13208.025: 97.4341% ( 14) 00:09:11.191 13208.025 - 13308.849: 97.5487% ( 16) 00:09:11.191 13308.849 - 13409.674: 97.7709% ( 31) 00:09:11.191 13409.674 - 13510.498: 98.1866% ( 58) 00:09:11.191 13510.498 - 13611.323: 98.2870% ( 14) 00:09:11.191 13611.323 - 13712.148: 98.3873% ( 14) 00:09:11.191 13712.148 - 13812.972: 98.4805% ( 13) 00:09:11.191 
13812.972 - 13913.797: 98.6239% ( 20) 00:09:11.191 13913.797 - 14014.622: 98.7744% ( 21) 00:09:11.191 14014.622 - 14115.446: 98.8174% ( 6) 00:09:11.191 14115.446 - 14216.271: 98.8604% ( 6) 00:09:11.191 14216.271 - 14317.095: 98.9034% ( 6) 00:09:11.191 14317.095 - 14417.920: 98.9536% ( 7) 00:09:11.191 14417.920 - 14518.745: 98.9966% ( 6) 00:09:11.191 14518.745 - 14619.569: 99.0396% ( 6) 00:09:11.191 14619.569 - 14720.394: 99.0754% ( 5) 00:09:11.191 14720.394 - 14821.218: 99.0826% ( 1) 00:09:11.191 18551.729 - 18652.554: 99.1112% ( 4) 00:09:11.191 18652.554 - 18753.378: 99.1399% ( 4) 00:09:11.191 18753.378 - 18854.203: 99.1686% ( 4) 00:09:11.191 18854.203 - 18955.028: 99.2044% ( 5) 00:09:11.191 18955.028 - 19055.852: 99.2331% ( 4) 00:09:11.191 19055.852 - 19156.677: 99.2618% ( 4) 00:09:11.191 19156.677 - 19257.502: 99.2904% ( 4) 00:09:11.191 19257.502 - 19358.326: 99.3191% ( 4) 00:09:11.191 19358.326 - 19459.151: 99.3406% ( 3) 00:09:11.191 19459.151 - 19559.975: 99.3693% ( 4) 00:09:11.191 19559.975 - 19660.800: 99.3979% ( 4) 00:09:11.191 19660.800 - 19761.625: 99.4266% ( 4) 00:09:11.191 19761.625 - 19862.449: 99.4553% ( 4) 00:09:11.191 19862.449 - 19963.274: 99.4839% ( 4) 00:09:11.191 19963.274 - 20064.098: 99.5126% ( 4) 00:09:11.191 20064.098 - 20164.923: 99.5413% ( 4) 00:09:11.191 20164.923 - 20265.748: 99.5700% ( 4) 00:09:11.191 20265.748 - 20366.572: 99.5986% ( 4) 00:09:11.191 20366.572 - 20467.397: 99.6273% ( 4) 00:09:11.191 20467.397 - 20568.222: 99.6560% ( 4) 00:09:11.191 20568.222 - 20669.046: 99.6846% ( 4) 00:09:11.191 20669.046 - 20769.871: 99.7133% ( 4) 00:09:11.191 20769.871 - 20870.695: 99.7420% ( 4) 00:09:11.191 20870.695 - 20971.520: 99.7635% ( 3) 00:09:11.191 20971.520 - 21072.345: 99.7850% ( 3) 00:09:11.191 21072.345 - 21173.169: 99.8136% ( 4) 00:09:11.191 21173.169 - 21273.994: 99.8423% ( 4) 00:09:11.191 21273.994 - 21374.818: 99.8710% ( 4) 00:09:11.191 21374.818 - 21475.643: 99.8997% ( 4) 00:09:11.191 21475.643 - 21576.468: 99.9355% ( 5) 00:09:11.191 21576.468 - 21677.292: 99.9570% ( 3) 00:09:11.191 21677.292 - 21778.117: 99.9857% ( 4) 00:09:11.191 21778.117 - 21878.942: 100.0000% ( 2) 00:09:11.191 00:09:11.191 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:11.191 ============================================================================== 00:09:11.191 Range in us Cumulative IO count 00:09:11.191 5167.262 - 5192.468: 0.0072% ( 1) 00:09:11.191 5192.468 - 5217.674: 0.0143% ( 1) 00:09:11.191 5217.674 - 5242.880: 0.0215% ( 1) 00:09:11.191 5242.880 - 5268.086: 0.0358% ( 2) 00:09:11.191 5268.086 - 5293.292: 0.0430% ( 1) 00:09:11.191 5293.292 - 5318.498: 0.0573% ( 2) 00:09:11.191 5318.498 - 5343.705: 0.0645% ( 1) 00:09:11.191 5343.705 - 5368.911: 0.0788% ( 2) 00:09:11.191 5368.911 - 5394.117: 0.0860% ( 1) 00:09:11.191 5394.117 - 5419.323: 0.0932% ( 1) 00:09:11.191 5419.323 - 5444.529: 0.1075% ( 2) 00:09:11.192 5444.529 - 5469.735: 0.1147% ( 1) 00:09:11.192 5469.735 - 5494.942: 0.1218% ( 1) 00:09:11.192 5494.942 - 5520.148: 0.1290% ( 1) 00:09:11.192 5520.148 - 5545.354: 0.1433% ( 2) 00:09:11.192 5545.354 - 5570.560: 0.1505% ( 1) 00:09:11.192 5570.560 - 5595.766: 0.1577% ( 1) 00:09:11.192 5595.766 - 5620.972: 0.1649% ( 1) 00:09:11.192 5620.972 - 5646.178: 0.1720% ( 1) 00:09:11.192 5646.178 - 5671.385: 0.1792% ( 1) 00:09:11.192 5671.385 - 5696.591: 0.1864% ( 1) 00:09:11.192 5696.591 - 5721.797: 0.1935% ( 1) 00:09:11.192 5721.797 - 5747.003: 0.2007% ( 1) 00:09:11.192 5747.003 - 5772.209: 0.2079% ( 1) 00:09:11.192 5797.415 - 5822.622: 0.2222% ( 2) 00:09:11.192 
5822.622 - 5847.828: 0.2294% ( 1) 00:09:11.192 5847.828 - 5873.034: 0.2365% ( 1) 00:09:11.192 5873.034 - 5898.240: 0.2437% ( 1) 00:09:11.192 5898.240 - 5923.446: 0.2724% ( 4) 00:09:11.192 5923.446 - 5948.652: 0.2939% ( 3) 00:09:11.192 5973.858 - 5999.065: 0.3010% ( 1) 00:09:11.192 5999.065 - 6024.271: 0.3082% ( 1) 00:09:11.192 6024.271 - 6049.477: 0.3154% ( 1) 00:09:11.192 6049.477 - 6074.683: 0.3225% ( 1) 00:09:11.192 6074.683 - 6099.889: 0.3297% ( 1) 00:09:11.192 6099.889 - 6125.095: 0.3440% ( 2) 00:09:11.192 6125.095 - 6150.302: 0.3655% ( 3) 00:09:11.192 6150.302 - 6175.508: 0.4014% ( 5) 00:09:11.192 6175.508 - 6200.714: 0.4372% ( 5) 00:09:11.192 6200.714 - 6225.920: 0.4659% ( 4) 00:09:11.192 6225.920 - 6251.126: 0.5017% ( 5) 00:09:11.192 6251.126 - 6276.332: 0.5734% ( 10) 00:09:11.192 6276.332 - 6301.538: 0.6236% ( 7) 00:09:11.192 6301.538 - 6326.745: 0.6952% ( 10) 00:09:11.192 6326.745 - 6351.951: 0.7382% ( 6) 00:09:11.192 6351.951 - 6377.157: 0.7741% ( 5) 00:09:11.192 6377.157 - 6402.363: 0.8028% ( 4) 00:09:11.192 6402.363 - 6427.569: 0.8529% ( 7) 00:09:11.192 6427.569 - 6452.775: 0.9174% ( 9) 00:09:11.192 6452.775 - 6503.188: 1.2686% ( 49) 00:09:11.192 6503.188 - 6553.600: 1.4192% ( 21) 00:09:11.192 6553.600 - 6604.012: 1.5123% ( 13) 00:09:11.192 6604.012 - 6654.425: 1.5983% ( 12) 00:09:11.192 6654.425 - 6704.837: 1.6987% ( 14) 00:09:11.192 6704.837 - 6755.249: 1.8277% ( 18) 00:09:11.192 6755.249 - 6805.662: 1.9495% ( 17) 00:09:11.192 6805.662 - 6856.074: 2.0069% ( 8) 00:09:11.192 6856.074 - 6906.486: 2.0427% ( 5) 00:09:11.192 6906.486 - 6956.898: 2.0786% ( 5) 00:09:11.192 6956.898 - 7007.311: 2.1072% ( 4) 00:09:11.192 7007.311 - 7057.723: 2.2219% ( 16) 00:09:11.192 7057.723 - 7108.135: 2.3653% ( 20) 00:09:11.192 7108.135 - 7158.548: 2.5803% ( 30) 00:09:11.192 7158.548 - 7208.960: 2.7595% ( 25) 00:09:11.192 7208.960 - 7259.372: 3.0175% ( 36) 00:09:11.192 7259.372 - 7309.785: 3.1752% ( 22) 00:09:11.192 7309.785 - 7360.197: 3.3902% ( 30) 00:09:11.192 7360.197 - 7410.609: 3.5909% ( 28) 00:09:11.192 7410.609 - 7461.022: 3.8059% ( 30) 00:09:11.192 7461.022 - 7511.434: 4.2073% ( 56) 00:09:11.192 7511.434 - 7561.846: 4.5728% ( 51) 00:09:11.192 7561.846 - 7612.258: 5.1534% ( 81) 00:09:11.192 7612.258 - 7662.671: 5.6336% ( 67) 00:09:11.192 7662.671 - 7713.083: 6.3432% ( 99) 00:09:11.192 7713.083 - 7763.495: 7.1459% ( 112) 00:09:11.192 7763.495 - 7813.908: 7.8698% ( 101) 00:09:11.192 7813.908 - 7864.320: 8.7084% ( 117) 00:09:11.192 7864.320 - 7914.732: 9.7979% ( 152) 00:09:11.192 7914.732 - 7965.145: 11.0808% ( 179) 00:09:11.192 7965.145 - 8015.557: 12.2778% ( 167) 00:09:11.192 8015.557 - 8065.969: 13.6253% ( 188) 00:09:11.192 8065.969 - 8116.382: 14.9656% ( 187) 00:09:11.192 8116.382 - 8166.794: 16.4923% ( 213) 00:09:11.192 8166.794 - 8217.206: 18.0046% ( 211) 00:09:11.192 8217.206 - 8267.618: 19.9326% ( 269) 00:09:11.192 8267.618 - 8318.031: 21.8893% ( 273) 00:09:11.192 8318.031 - 8368.443: 23.7959% ( 266) 00:09:11.192 8368.443 - 8418.855: 25.9604% ( 302) 00:09:11.192 8418.855 - 8469.268: 28.1322% ( 303) 00:09:11.192 8469.268 - 8519.680: 30.2752% ( 299) 00:09:11.192 8519.680 - 8570.092: 32.3251% ( 286) 00:09:11.192 8570.092 - 8620.505: 34.5040% ( 304) 00:09:11.192 8620.505 - 8670.917: 36.3389% ( 256) 00:09:11.192 8670.917 - 8721.329: 38.2669% ( 269) 00:09:11.192 8721.329 - 8771.742: 40.2380% ( 275) 00:09:11.192 8771.742 - 8822.154: 42.3452% ( 294) 00:09:11.192 8822.154 - 8872.566: 44.5169% ( 303) 00:09:11.192 8872.566 - 8922.978: 46.4163% ( 265) 00:09:11.192 8922.978 - 8973.391: 48.1651% ( 
244) 00:09:11.192 8973.391 - 9023.803: 49.9068% ( 243) 00:09:11.192 9023.803 - 9074.215: 51.6987% ( 250) 00:09:11.192 9074.215 - 9124.628: 53.5407% ( 257) 00:09:11.192 9124.628 - 9175.040: 55.6623% ( 296) 00:09:11.192 9175.040 - 9225.452: 57.5760% ( 267) 00:09:11.192 9225.452 - 9275.865: 59.4610% ( 263) 00:09:11.192 9275.865 - 9326.277: 61.3389% ( 262) 00:09:11.192 9326.277 - 9376.689: 63.2024% ( 260) 00:09:11.192 9376.689 - 9427.102: 65.0158% ( 253) 00:09:11.192 9427.102 - 9477.514: 67.0155% ( 279) 00:09:11.192 9477.514 - 9527.926: 68.8217% ( 252) 00:09:11.192 9527.926 - 9578.338: 70.7497% ( 269) 00:09:11.192 9578.338 - 9628.751: 72.4699% ( 240) 00:09:11.192 9628.751 - 9679.163: 74.3406% ( 261) 00:09:11.192 9679.163 - 9729.575: 76.0608% ( 240) 00:09:11.192 9729.575 - 9779.988: 77.5516% ( 208) 00:09:11.192 9779.988 - 9830.400: 79.1069% ( 217) 00:09:11.192 9830.400 - 9880.812: 80.5906% ( 207) 00:09:11.192 9880.812 - 9931.225: 81.9237% ( 186) 00:09:11.192 9931.225 - 9981.637: 83.3357% ( 197) 00:09:11.192 9981.637 - 10032.049: 84.5614% ( 171) 00:09:11.192 10032.049 - 10082.462: 85.7081% ( 160) 00:09:11.192 10082.462 - 10132.874: 86.7689% ( 148) 00:09:11.192 10132.874 - 10183.286: 87.7795% ( 141) 00:09:11.192 10183.286 - 10233.698: 88.6611% ( 123) 00:09:11.192 10233.698 - 10284.111: 89.4710% ( 113) 00:09:11.192 10284.111 - 10334.523: 90.1304% ( 92) 00:09:11.192 10334.523 - 10384.935: 90.7182% ( 82) 00:09:11.192 10384.935 - 10435.348: 91.2629% ( 76) 00:09:11.192 10435.348 - 10485.760: 91.8363% ( 80) 00:09:11.192 10485.760 - 10536.172: 92.2878% ( 63) 00:09:11.192 10536.172 - 10586.585: 92.6964% ( 57) 00:09:11.192 10586.585 - 10636.997: 93.0189% ( 45) 00:09:11.192 10636.997 - 10687.409: 93.2913% ( 38) 00:09:11.192 10687.409 - 10737.822: 93.5278% ( 33) 00:09:11.192 10737.822 - 10788.234: 93.7357% ( 29) 00:09:11.192 10788.234 - 10838.646: 93.9579% ( 31) 00:09:11.192 10838.646 - 10889.058: 94.1514% ( 27) 00:09:11.192 10889.058 - 10939.471: 94.3019% ( 21) 00:09:11.192 10939.471 - 10989.883: 94.4524% ( 21) 00:09:11.192 10989.883 - 11040.295: 94.5886% ( 19) 00:09:11.192 11040.295 - 11090.708: 94.7033% ( 16) 00:09:11.192 11090.708 - 11141.120: 94.8538% ( 21) 00:09:11.192 11141.120 - 11191.532: 94.9756% ( 17) 00:09:11.192 11191.532 - 11241.945: 95.0903% ( 16) 00:09:11.192 11241.945 - 11292.357: 95.2050% ( 16) 00:09:11.192 11292.357 - 11342.769: 95.3842% ( 25) 00:09:11.192 11342.769 - 11393.182: 95.5060% ( 17) 00:09:11.192 11393.182 - 11443.594: 95.6135% ( 15) 00:09:11.192 11443.594 - 11494.006: 95.7354% ( 17) 00:09:11.192 11494.006 - 11544.418: 95.8357% ( 14) 00:09:11.192 11544.418 - 11594.831: 95.9074% ( 10) 00:09:11.192 11594.831 - 11645.243: 95.9934% ( 12) 00:09:11.192 11645.243 - 11695.655: 96.0651% ( 10) 00:09:11.192 11695.655 - 11746.068: 96.1368% ( 10) 00:09:11.192 11746.068 - 11796.480: 96.1869% ( 7) 00:09:11.192 11796.480 - 11846.892: 96.2586% ( 10) 00:09:11.192 11846.892 - 11897.305: 96.3016% ( 6) 00:09:11.192 11897.305 - 11947.717: 96.3589% ( 8) 00:09:11.192 11947.717 - 11998.129: 96.4235% ( 9) 00:09:11.192 11998.129 - 12048.542: 96.4736% ( 7) 00:09:11.192 12048.542 - 12098.954: 96.5668% ( 13) 00:09:11.192 12098.954 - 12149.366: 96.6456% ( 11) 00:09:11.192 12149.366 - 12199.778: 96.8320% ( 26) 00:09:11.192 12199.778 - 12250.191: 96.9323% ( 14) 00:09:11.192 12250.191 - 12300.603: 97.0470% ( 16) 00:09:11.192 12300.603 - 12351.015: 97.1259% ( 11) 00:09:11.192 12351.015 - 12401.428: 97.2405% ( 16) 00:09:11.192 12401.428 - 12451.840: 97.3337% ( 13) 00:09:11.192 12451.840 - 12502.252: 97.4269% ( 13) 
00:09:11.192 12502.252 - 12552.665: 97.5129% ( 12) 00:09:11.192 12552.665 - 12603.077: 97.5774% ( 9) 00:09:11.192 12603.077 - 12653.489: 97.6634% ( 12) 00:09:11.192 12653.489 - 12703.902: 97.7423% ( 11) 00:09:11.192 12703.902 - 12754.314: 97.8068% ( 9) 00:09:11.192 12754.314 - 12804.726: 97.8784% ( 10) 00:09:11.192 12804.726 - 12855.138: 97.9358% ( 8) 00:09:11.192 12855.138 - 12905.551: 97.9931% ( 8) 00:09:11.192 12905.551 - 13006.375: 98.0935% ( 14) 00:09:11.192 13006.375 - 13107.200: 98.1866% ( 13) 00:09:11.192 13107.200 - 13208.025: 98.2511% ( 9) 00:09:11.192 13208.025 - 13308.849: 98.3157% ( 9) 00:09:11.192 13308.849 - 13409.674: 98.3658% ( 7) 00:09:11.192 13409.674 - 13510.498: 98.4160% ( 7) 00:09:11.192 13510.498 - 13611.323: 98.4733% ( 8) 00:09:11.192 13611.323 - 13712.148: 98.5378% ( 9) 00:09:11.192 13712.148 - 13812.972: 98.5880% ( 7) 00:09:11.192 13812.972 - 13913.797: 98.6525% ( 9) 00:09:11.192 13913.797 - 14014.622: 98.7027% ( 7) 00:09:11.192 14014.622 - 14115.446: 98.7600% ( 8) 00:09:11.192 14115.446 - 14216.271: 98.8245% ( 9) 00:09:11.192 14216.271 - 14317.095: 98.8747% ( 7) 00:09:11.192 14317.095 - 14417.920: 98.9321% ( 8) 00:09:11.192 14417.920 - 14518.745: 98.9966% ( 9) 00:09:11.192 14518.745 - 14619.569: 99.0467% ( 7) 00:09:11.192 14619.569 - 14720.394: 99.0611% ( 2) 00:09:11.192 14720.394 - 14821.218: 99.0754% ( 2) 00:09:11.192 14821.218 - 14922.043: 99.0826% ( 1) 00:09:11.192 19559.975 - 19660.800: 99.1041% ( 3) 00:09:11.192 19660.800 - 19761.625: 99.1256% ( 3) 00:09:11.192 19761.625 - 19862.449: 99.1829% ( 8) 00:09:11.193 19862.449 - 19963.274: 99.2689% ( 12) 00:09:11.193 19963.274 - 20064.098: 99.3191% ( 7) 00:09:11.193 20064.098 - 20164.923: 99.3836% ( 9) 00:09:11.193 20164.923 - 20265.748: 99.4266% ( 6) 00:09:11.193 20265.748 - 20366.572: 99.4553% ( 4) 00:09:11.193 20366.572 - 20467.397: 99.4624% ( 1) 00:09:11.193 20769.871 - 20870.695: 99.4696% ( 1) 00:09:11.193 20870.695 - 20971.520: 99.4983% ( 4) 00:09:11.193 20971.520 - 21072.345: 99.5198% ( 3) 00:09:11.193 21072.345 - 21173.169: 99.5413% ( 3) 00:09:11.193 21173.169 - 21273.994: 99.5628% ( 3) 00:09:11.193 21273.994 - 21374.818: 99.5915% ( 4) 00:09:11.193 21374.818 - 21475.643: 99.6058% ( 2) 00:09:11.193 21475.643 - 21576.468: 99.6345% ( 4) 00:09:11.193 21576.468 - 21677.292: 99.6560% ( 3) 00:09:11.193 21677.292 - 21778.117: 99.6775% ( 3) 00:09:11.193 21778.117 - 21878.942: 99.6990% ( 3) 00:09:11.193 21878.942 - 21979.766: 99.7276% ( 4) 00:09:11.193 21979.766 - 22080.591: 99.7563% ( 4) 00:09:11.193 22080.591 - 22181.415: 99.7778% ( 3) 00:09:11.193 22181.415 - 22282.240: 99.7993% ( 3) 00:09:11.193 22282.240 - 22383.065: 99.8280% ( 4) 00:09:11.193 22383.065 - 22483.889: 99.8495% ( 3) 00:09:11.193 22483.889 - 22584.714: 99.8710% ( 3) 00:09:11.193 22584.714 - 22685.538: 99.8997% ( 4) 00:09:11.193 22685.538 - 22786.363: 99.9283% ( 4) 00:09:11.193 22786.363 - 22887.188: 99.9498% ( 3) 00:09:11.193 22887.188 - 22988.012: 99.9713% ( 3) 00:09:11.193 22988.012 - 23088.837: 99.9928% ( 3) 00:09:11.193 23088.837 - 23189.662: 100.0000% ( 1) 00:09:11.193 00:09:11.193 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:11.193 ============================================================================== 00:09:11.193 Range in us Cumulative IO count 00:09:11.193 5923.446 - 5948.652: 0.0143% ( 2) 00:09:11.193 5948.652 - 5973.858: 0.0430% ( 4) 00:09:11.193 5973.858 - 5999.065: 0.0645% ( 3) 00:09:11.193 5999.065 - 6024.271: 0.0860% ( 3) 00:09:11.193 6024.271 - 6049.477: 0.1362% ( 7) 00:09:11.193 6049.477 - 6074.683: 
0.1864% ( 7) 00:09:11.193 6074.683 - 6099.889: 0.2437% ( 8) 00:09:11.193 6099.889 - 6125.095: 0.2867% ( 6) 00:09:11.193 6125.095 - 6150.302: 0.3727% ( 12) 00:09:11.193 6150.302 - 6175.508: 0.4372% ( 9) 00:09:11.193 6175.508 - 6200.714: 0.7096% ( 38) 00:09:11.193 6200.714 - 6225.920: 0.9676% ( 36) 00:09:11.193 6225.920 - 6251.126: 1.0106% ( 6) 00:09:11.193 6251.126 - 6276.332: 1.0536% ( 6) 00:09:11.193 6276.332 - 6301.538: 1.1038% ( 7) 00:09:11.193 6301.538 - 6326.745: 1.1540% ( 7) 00:09:11.193 6326.745 - 6351.951: 1.2185% ( 9) 00:09:11.193 6351.951 - 6377.157: 1.2973% ( 11) 00:09:11.193 6377.157 - 6402.363: 1.5482% ( 35) 00:09:11.193 6402.363 - 6427.569: 1.5697% ( 3) 00:09:11.193 6427.569 - 6452.775: 1.5912% ( 3) 00:09:11.193 6452.775 - 6503.188: 1.6270% ( 5) 00:09:11.193 6503.188 - 6553.600: 1.6485% ( 3) 00:09:11.193 6553.600 - 6604.012: 1.6843% ( 5) 00:09:11.193 6604.012 - 6654.425: 1.7058% ( 3) 00:09:11.193 6654.425 - 6704.837: 1.7417% ( 5) 00:09:11.193 6704.837 - 6755.249: 1.7847% ( 6) 00:09:11.193 6755.249 - 6805.662: 1.8205% ( 5) 00:09:11.193 6805.662 - 6856.074: 1.8994% ( 11) 00:09:11.193 6856.074 - 6906.486: 1.9854% ( 12) 00:09:11.193 6906.486 - 6956.898: 2.0786% ( 13) 00:09:11.193 6956.898 - 7007.311: 2.1717% ( 13) 00:09:11.193 7007.311 - 7057.723: 2.3366% ( 23) 00:09:11.193 7057.723 - 7108.135: 2.5014% ( 23) 00:09:11.193 7108.135 - 7158.548: 2.6806% ( 25) 00:09:11.193 7158.548 - 7208.960: 2.8240% ( 20) 00:09:11.193 7208.960 - 7259.372: 2.9530% ( 18) 00:09:11.193 7259.372 - 7309.785: 3.0963% ( 20) 00:09:11.193 7309.785 - 7360.197: 3.2970% ( 28) 00:09:11.193 7360.197 - 7410.609: 3.5192% ( 31) 00:09:11.193 7410.609 - 7461.022: 3.7844% ( 37) 00:09:11.193 7461.022 - 7511.434: 4.1499% ( 51) 00:09:11.193 7511.434 - 7561.846: 4.4653% ( 44) 00:09:11.193 7561.846 - 7612.258: 4.9025% ( 61) 00:09:11.193 7612.258 - 7662.671: 5.1606% ( 36) 00:09:11.193 7662.671 - 7713.083: 5.4257% ( 37) 00:09:11.193 7713.083 - 7763.495: 5.7339% ( 43) 00:09:11.193 7763.495 - 7813.908: 6.1353% ( 56) 00:09:11.193 7813.908 - 7864.320: 6.6084% ( 66) 00:09:11.193 7864.320 - 7914.732: 7.1818% ( 80) 00:09:11.193 7914.732 - 7965.145: 7.9057% ( 101) 00:09:11.193 7965.145 - 8015.557: 8.7873% ( 123) 00:09:11.193 8015.557 - 8065.969: 9.8767% ( 152) 00:09:11.193 8065.969 - 8116.382: 11.0737% ( 167) 00:09:11.193 8116.382 - 8166.794: 12.3925% ( 184) 00:09:11.193 8166.794 - 8217.206: 13.9120% ( 212) 00:09:11.193 8217.206 - 8267.618: 15.6680% ( 245) 00:09:11.193 8267.618 - 8318.031: 17.2807% ( 225) 00:09:11.193 8318.031 - 8368.443: 19.0869% ( 252) 00:09:11.193 8368.443 - 8418.855: 21.2299% ( 299) 00:09:11.193 8418.855 - 8469.268: 23.4447% ( 309) 00:09:11.193 8469.268 - 8519.680: 25.7096% ( 316) 00:09:11.193 8519.680 - 8570.092: 28.2827% ( 359) 00:09:11.193 8570.092 - 8620.505: 31.4220% ( 438) 00:09:11.193 8620.505 - 8670.917: 33.9880% ( 358) 00:09:11.193 8670.917 - 8721.329: 36.6542% ( 372) 00:09:11.193 8721.329 - 8771.742: 39.1987% ( 355) 00:09:11.193 8771.742 - 8822.154: 41.5424% ( 327) 00:09:11.193 8822.154 - 8872.566: 43.9435% ( 335) 00:09:11.193 8872.566 - 8922.978: 46.4306% ( 347) 00:09:11.193 8922.978 - 8973.391: 48.9464% ( 351) 00:09:11.193 8973.391 - 9023.803: 51.3761% ( 339) 00:09:11.193 9023.803 - 9074.215: 53.8919% ( 351) 00:09:11.193 9074.215 - 9124.628: 56.2930% ( 335) 00:09:11.193 9124.628 - 9175.040: 58.5507% ( 315) 00:09:11.193 9175.040 - 9225.452: 60.7081% ( 301) 00:09:11.193 9225.452 - 9275.865: 63.0017% ( 320) 00:09:11.193 9275.865 - 9326.277: 65.3813% ( 332) 00:09:11.193 9326.277 - 9376.689: 67.6606% ( 318) 
00:09:11.193 9376.689 - 9427.102: 69.8538% ( 306) 00:09:11.193 9427.102 - 9477.514: 72.1044% ( 314) 00:09:11.193 9477.514 - 9527.926: 74.1542% ( 286) 00:09:11.193 9527.926 - 9578.338: 76.0679% ( 267) 00:09:11.193 9578.338 - 9628.751: 77.8311% ( 246) 00:09:11.193 9628.751 - 9679.163: 79.5728% ( 243) 00:09:11.193 9679.163 - 9729.575: 81.2500% ( 234) 00:09:11.193 9729.575 - 9779.988: 82.7910% ( 215) 00:09:11.193 9779.988 - 9830.400: 84.0023% ( 169) 00:09:11.193 9830.400 - 9880.812: 85.2638% ( 176) 00:09:11.193 9880.812 - 9931.225: 86.3460% ( 151) 00:09:11.193 9931.225 - 9981.637: 87.3853% ( 145) 00:09:11.193 9981.637 - 10032.049: 88.2024% ( 114) 00:09:11.193 10032.049 - 10082.462: 89.0123% ( 113) 00:09:11.193 10082.462 - 10132.874: 89.7434% ( 102) 00:09:11.193 10132.874 - 10183.286: 90.4243% ( 95) 00:09:11.193 10183.286 - 10233.698: 91.0049% ( 81) 00:09:11.193 10233.698 - 10284.111: 91.4994% ( 69) 00:09:11.193 10284.111 - 10334.523: 91.9295% ( 60) 00:09:11.193 10334.523 - 10384.935: 92.3022% ( 52) 00:09:11.193 10384.935 - 10435.348: 92.6534% ( 49) 00:09:11.193 10435.348 - 10485.760: 92.9759% ( 45) 00:09:11.193 10485.760 - 10536.172: 93.2698% ( 41) 00:09:11.193 10536.172 - 10586.585: 93.5350% ( 37) 00:09:11.193 10586.585 - 10636.997: 93.7787% ( 34) 00:09:11.193 10636.997 - 10687.409: 93.9722% ( 27) 00:09:11.193 10687.409 - 10737.822: 94.1585% ( 26) 00:09:11.193 10737.822 - 10788.234: 94.3521% ( 27) 00:09:11.193 10788.234 - 10838.646: 94.5169% ( 23) 00:09:11.193 10838.646 - 10889.058: 94.6818% ( 23) 00:09:11.193 10889.058 - 10939.471: 94.8251% ( 20) 00:09:11.193 10939.471 - 10989.883: 94.9828% ( 22) 00:09:11.193 10989.883 - 11040.295: 95.1261% ( 20) 00:09:11.193 11040.295 - 11090.708: 95.2480% ( 17) 00:09:11.193 11090.708 - 11141.120: 95.3412% ( 13) 00:09:11.193 11141.120 - 11191.532: 95.4415% ( 14) 00:09:11.193 11191.532 - 11241.945: 95.5275% ( 12) 00:09:11.193 11241.945 - 11292.357: 95.5777% ( 7) 00:09:11.193 11292.357 - 11342.769: 95.6350% ( 8) 00:09:11.193 11342.769 - 11393.182: 95.7067% ( 10) 00:09:11.193 11393.182 - 11443.594: 95.7569% ( 7) 00:09:11.193 11443.594 - 11494.006: 95.8071% ( 7) 00:09:11.193 11494.006 - 11544.418: 95.8572% ( 7) 00:09:11.193 11544.418 - 11594.831: 95.9146% ( 8) 00:09:11.193 11594.831 - 11645.243: 95.9576% ( 6) 00:09:11.193 11645.243 - 11695.655: 95.9862% ( 4) 00:09:11.193 11695.655 - 11746.068: 96.0364% ( 7) 00:09:11.194 11746.068 - 11796.480: 96.0794% ( 6) 00:09:11.194 11796.480 - 11846.892: 96.1296% ( 7) 00:09:11.194 11846.892 - 11897.305: 96.1726% ( 6) 00:09:11.194 11897.305 - 11947.717: 96.2084% ( 5) 00:09:11.194 11947.717 - 11998.129: 96.2658% ( 8) 00:09:11.194 11998.129 - 12048.542: 96.3231% ( 8) 00:09:11.194 12048.542 - 12098.954: 96.3589% ( 5) 00:09:11.194 12098.954 - 12149.366: 96.4235% ( 9) 00:09:11.194 12149.366 - 12199.778: 96.4951% ( 10) 00:09:11.194 12199.778 - 12250.191: 96.5525% ( 8) 00:09:11.194 12250.191 - 12300.603: 96.6098% ( 8) 00:09:11.194 12300.603 - 12351.015: 96.6456% ( 5) 00:09:11.194 12351.015 - 12401.428: 96.7030% ( 8) 00:09:11.194 12401.428 - 12451.840: 96.7388% ( 5) 00:09:11.194 12451.840 - 12502.252: 96.8033% ( 9) 00:09:11.194 12502.252 - 12552.665: 96.8463% ( 6) 00:09:11.194 12552.665 - 12603.077: 96.9180% ( 10) 00:09:11.194 12603.077 - 12653.489: 96.9897% ( 10) 00:09:11.194 12653.489 - 12703.902: 97.0614% ( 10) 00:09:11.194 12703.902 - 12754.314: 97.1402% ( 11) 00:09:11.194 12754.314 - 12804.726: 97.2262% ( 12) 00:09:11.194 12804.726 - 12855.138: 97.2907% ( 9) 00:09:11.194 12855.138 - 12905.551: 97.3552% ( 9) 00:09:11.194 
12905.551 - 13006.375: 97.5057% ( 21) 00:09:11.194 13006.375 - 13107.200: 97.6347% ( 18) 00:09:11.194 13107.200 - 13208.025: 97.7924% ( 22) 00:09:11.194 13208.025 - 13308.849: 97.9286% ( 19) 00:09:11.194 13308.849 - 13409.674: 98.0648% ( 19) 00:09:11.194 13409.674 - 13510.498: 98.1866% ( 17) 00:09:11.194 13510.498 - 13611.323: 98.3300% ( 20) 00:09:11.194 13611.323 - 13712.148: 98.4160% ( 12) 00:09:11.194 13712.148 - 13812.972: 98.5665% ( 21) 00:09:11.194 13812.972 - 13913.797: 98.7027% ( 19) 00:09:11.194 13913.797 - 14014.622: 98.7887% ( 12) 00:09:11.194 14014.622 - 14115.446: 98.8317% ( 6) 00:09:11.194 14115.446 - 14216.271: 98.8747% ( 6) 00:09:11.194 14216.271 - 14317.095: 98.9177% ( 6) 00:09:11.194 14317.095 - 14417.920: 98.9607% ( 6) 00:09:11.194 14417.920 - 14518.745: 99.0037% ( 6) 00:09:11.194 14518.745 - 14619.569: 99.0467% ( 6) 00:09:11.194 14619.569 - 14720.394: 99.0826% ( 5) 00:09:11.194 20870.695 - 20971.520: 99.0897% ( 1) 00:09:11.194 20971.520 - 21072.345: 99.1184% ( 4) 00:09:11.194 21072.345 - 21173.169: 99.1542% ( 5) 00:09:11.194 21173.169 - 21273.994: 99.1972% ( 6) 00:09:11.194 21273.994 - 21374.818: 99.2259% ( 4) 00:09:11.194 21374.818 - 21475.643: 99.2546% ( 4) 00:09:11.194 21475.643 - 21576.468: 99.2976% ( 6) 00:09:11.194 21576.468 - 21677.292: 99.3334% ( 5) 00:09:11.194 21677.292 - 21778.117: 99.3621% ( 4) 00:09:11.194 21778.117 - 21878.942: 99.4051% ( 6) 00:09:11.194 21878.942 - 21979.766: 99.4338% ( 4) 00:09:11.194 21979.766 - 22080.591: 99.4839% ( 7) 00:09:11.194 22080.591 - 22181.415: 99.5556% ( 10) 00:09:11.194 22181.415 - 22282.240: 99.6631% ( 15) 00:09:11.194 22282.240 - 22383.065: 99.7133% ( 7) 00:09:11.194 22383.065 - 22483.889: 99.7635% ( 7) 00:09:11.194 22483.889 - 22584.714: 99.8208% ( 8) 00:09:11.194 22584.714 - 22685.538: 99.8638% ( 6) 00:09:11.194 22685.538 - 22786.363: 99.9068% ( 6) 00:09:11.194 22786.363 - 22887.188: 99.9355% ( 4) 00:09:11.194 22887.188 - 22988.012: 99.9570% ( 3) 00:09:11.194 22988.012 - 23088.837: 99.9857% ( 4) 00:09:11.194 23088.837 - 23189.662: 100.0000% ( 2) 00:09:11.194 00:09:11.194 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:11.194 ============================================================================== 00:09:11.194 Range in us Cumulative IO count 00:09:11.194 4864.788 - 4889.994: 0.0072% ( 1) 00:09:11.194 5091.643 - 5116.849: 0.0287% ( 3) 00:09:11.194 5116.849 - 5142.055: 0.0573% ( 4) 00:09:11.194 5142.055 - 5167.262: 0.0717% ( 2) 00:09:11.194 5167.262 - 5192.468: 0.1147% ( 6) 00:09:11.194 5192.468 - 5217.674: 0.1362% ( 3) 00:09:11.194 5217.674 - 5242.880: 0.1577% ( 3) 00:09:11.194 5242.880 - 5268.086: 0.1864% ( 4) 00:09:11.194 5268.086 - 5293.292: 0.2150% ( 4) 00:09:11.194 5293.292 - 5318.498: 0.2509% ( 5) 00:09:11.194 5318.498 - 5343.705: 0.2652% ( 2) 00:09:11.194 5343.705 - 5368.911: 0.2867% ( 3) 00:09:11.194 5368.911 - 5394.117: 0.3225% ( 5) 00:09:11.194 5394.117 - 5419.323: 0.3870% ( 9) 00:09:11.194 5419.323 - 5444.529: 0.4372% ( 7) 00:09:11.194 5444.529 - 5469.735: 0.4731% ( 5) 00:09:11.194 5469.735 - 5494.942: 0.4874% ( 2) 00:09:11.194 5494.942 - 5520.148: 0.5017% ( 2) 00:09:11.194 5520.148 - 5545.354: 0.5161% ( 2) 00:09:11.194 5545.354 - 5570.560: 0.5304% ( 2) 00:09:11.194 5570.560 - 5595.766: 0.5447% ( 2) 00:09:11.194 5595.766 - 5620.972: 0.5806% ( 5) 00:09:11.194 5620.972 - 5646.178: 0.6236% ( 6) 00:09:11.194 5646.178 - 5671.385: 0.6522% ( 4) 00:09:11.194 5671.385 - 5696.591: 0.6737% ( 3) 00:09:11.194 5696.591 - 5721.797: 0.7096% ( 5) 00:09:11.194 5721.797 - 5747.003: 0.7382% ( 4) 
00:09:11.194 5747.003 - 5772.209: 0.7597% ( 3) 00:09:11.194 5772.209 - 5797.415: 0.7956% ( 5) 00:09:11.194 5797.415 - 5822.622: 0.8314% ( 5) 00:09:11.194 5822.622 - 5847.828: 0.8601% ( 4) 00:09:11.194 5847.828 - 5873.034: 0.8888% ( 4) 00:09:11.194 5873.034 - 5898.240: 0.9318% ( 6) 00:09:11.194 5898.240 - 5923.446: 0.9891% ( 8) 00:09:11.194 5923.446 - 5948.652: 1.0393% ( 7) 00:09:11.194 5948.652 - 5973.858: 1.0751% ( 5) 00:09:11.194 5973.858 - 5999.065: 1.1253% ( 7) 00:09:11.194 5999.065 - 6024.271: 1.2113% ( 12) 00:09:11.194 6024.271 - 6049.477: 1.4263% ( 30) 00:09:11.194 6049.477 - 6074.683: 1.4765% ( 7) 00:09:11.194 6074.683 - 6099.889: 1.5195% ( 6) 00:09:11.194 6099.889 - 6125.095: 1.5482% ( 4) 00:09:11.194 6125.095 - 6150.302: 1.5840% ( 5) 00:09:11.194 6150.302 - 6175.508: 1.6055% ( 3) 00:09:11.194 6175.508 - 6200.714: 1.6413% ( 5) 00:09:11.194 6200.714 - 6225.920: 1.6628% ( 3) 00:09:11.194 6225.920 - 6251.126: 1.6915% ( 4) 00:09:11.194 6251.126 - 6276.332: 1.7130% ( 3) 00:09:11.194 6276.332 - 6301.538: 1.7489% ( 5) 00:09:11.194 6301.538 - 6326.745: 1.7847% ( 5) 00:09:11.194 6326.745 - 6351.951: 1.8134% ( 4) 00:09:11.194 6351.951 - 6377.157: 1.8850% ( 10) 00:09:11.194 6377.157 - 6402.363: 1.9424% ( 8) 00:09:11.194 6402.363 - 6427.569: 2.2864% ( 48) 00:09:11.194 6427.569 - 6452.775: 2.5659% ( 39) 00:09:11.194 6452.775 - 6503.188: 2.6519% ( 12) 00:09:11.194 6503.188 - 6553.600: 2.7523% ( 14) 00:09:11.194 6553.600 - 6604.012: 2.8168% ( 9) 00:09:11.194 6604.012 - 6654.425: 2.8885% ( 10) 00:09:11.194 6654.425 - 6704.837: 2.9530% ( 9) 00:09:11.194 6704.837 - 6755.249: 3.0318% ( 11) 00:09:11.194 6755.249 - 6805.662: 3.0963% ( 9) 00:09:11.194 6805.662 - 6856.074: 3.2253% ( 18) 00:09:11.194 6856.074 - 6906.486: 3.3544% ( 18) 00:09:11.194 6906.486 - 6956.898: 3.4977% ( 20) 00:09:11.194 6956.898 - 7007.311: 3.6339% ( 19) 00:09:11.194 7007.311 - 7057.723: 3.7916% ( 22) 00:09:11.194 7057.723 - 7108.135: 3.9636% ( 24) 00:09:11.194 7108.135 - 7158.548: 4.1356% ( 24) 00:09:11.194 7158.548 - 7208.960: 4.4581% ( 45) 00:09:11.194 7208.960 - 7259.372: 4.8237% ( 51) 00:09:11.194 7259.372 - 7309.785: 5.1462% ( 45) 00:09:11.194 7309.785 - 7360.197: 5.3541% ( 29) 00:09:11.194 7360.197 - 7410.609: 5.5834% ( 32) 00:09:11.194 7410.609 - 7461.022: 5.7985% ( 30) 00:09:11.194 7461.022 - 7511.434: 6.0636% ( 37) 00:09:11.194 7511.434 - 7561.846: 6.2930% ( 32) 00:09:11.194 7561.846 - 7612.258: 6.4794% ( 26) 00:09:11.194 7612.258 - 7662.671: 6.7087% ( 32) 00:09:11.194 7662.671 - 7713.083: 6.9524% ( 34) 00:09:11.194 7713.083 - 7763.495: 7.2749% ( 45) 00:09:11.194 7763.495 - 7813.908: 7.5975% ( 45) 00:09:11.194 7813.908 - 7864.320: 7.9057% ( 43) 00:09:11.194 7864.320 - 7914.732: 8.2282% ( 45) 00:09:11.194 7914.732 - 7965.145: 8.6726% ( 62) 00:09:11.194 7965.145 - 8015.557: 9.2388% ( 79) 00:09:11.194 8015.557 - 8065.969: 10.0272% ( 110) 00:09:11.194 8065.969 - 8116.382: 11.0307% ( 140) 00:09:11.194 8116.382 - 8166.794: 12.3351% ( 182) 00:09:11.194 8166.794 - 8217.206: 13.7615% ( 199) 00:09:11.194 8217.206 - 8267.618: 15.4530% ( 236) 00:09:11.194 8267.618 - 8318.031: 17.2663% ( 253) 00:09:11.194 8318.031 - 8368.443: 19.0654% ( 251) 00:09:11.194 8368.443 - 8418.855: 21.1583% ( 292) 00:09:11.194 8418.855 - 8469.268: 23.2153% ( 287) 00:09:11.194 8469.268 - 8519.680: 25.3584% ( 299) 00:09:11.194 8519.680 - 8570.092: 27.6735% ( 323) 00:09:11.194 8570.092 - 8620.505: 30.2466% ( 359) 00:09:11.194 8620.505 - 8670.917: 32.9415% ( 376) 00:09:11.194 8670.917 - 8721.329: 35.3211% ( 332) 00:09:11.194 8721.329 - 8771.742: 37.9229% ( 
363) 00:09:11.194 8771.742 - 8822.154: 40.4386% ( 351) 00:09:11.194 8822.154 - 8872.566: 43.1408% ( 377) 00:09:11.194 8872.566 - 8922.978: 45.6565% ( 351) 00:09:11.194 8922.978 - 8973.391: 48.2010% ( 355) 00:09:11.194 8973.391 - 9023.803: 50.9246% ( 380) 00:09:11.194 9023.803 - 9074.215: 53.3974% ( 345) 00:09:11.194 9074.215 - 9124.628: 55.8128% ( 337) 00:09:11.194 9124.628 - 9175.040: 58.2210% ( 336) 00:09:11.194 9175.040 - 9225.452: 60.6293% ( 336) 00:09:11.194 9225.452 - 9275.865: 63.0089% ( 332) 00:09:11.194 9275.865 - 9326.277: 65.1663% ( 301) 00:09:11.194 9326.277 - 9376.689: 67.3954% ( 311) 00:09:11.194 9376.689 - 9427.102: 69.7678% ( 331) 00:09:11.194 9427.102 - 9477.514: 71.8105% ( 285) 00:09:11.194 9477.514 - 9527.926: 73.8245% ( 281) 00:09:11.194 9527.926 - 9578.338: 75.7526% ( 269) 00:09:11.194 9578.338 - 9628.751: 77.5158% ( 246) 00:09:11.194 9628.751 - 9679.163: 79.1714% ( 231) 00:09:11.194 9679.163 - 9729.575: 80.6336% ( 204) 00:09:11.194 9729.575 - 9779.988: 82.0241% ( 194) 00:09:11.195 9779.988 - 9830.400: 83.2856% ( 176) 00:09:11.195 9830.400 - 9880.812: 84.5542% ( 177) 00:09:11.195 9880.812 - 9931.225: 85.7296% ( 164) 00:09:11.195 9931.225 - 9981.637: 86.7259% ( 139) 00:09:11.195 9981.637 - 10032.049: 87.6433% ( 128) 00:09:11.195 10032.049 - 10082.462: 88.5464% ( 126) 00:09:11.195 10082.462 - 10132.874: 89.3492% ( 112) 00:09:11.195 10132.874 - 10183.286: 90.0659% ( 100) 00:09:11.195 10183.286 - 10233.698: 90.6967% ( 88) 00:09:11.195 10233.698 - 10284.111: 91.2557% ( 78) 00:09:11.195 10284.111 - 10334.523: 91.7503% ( 69) 00:09:11.195 10334.523 - 10384.935: 92.1875% ( 61) 00:09:11.195 10384.935 - 10435.348: 92.6319% ( 62) 00:09:11.195 10435.348 - 10485.760: 92.9903% ( 50) 00:09:11.195 10485.760 - 10536.172: 93.3343% ( 48) 00:09:11.195 10536.172 - 10586.585: 93.6282% ( 41) 00:09:11.195 10586.585 - 10636.997: 93.8217% ( 27) 00:09:11.195 10636.997 - 10687.409: 94.0009% ( 25) 00:09:11.195 10687.409 - 10737.822: 94.1657% ( 23) 00:09:11.195 10737.822 - 10788.234: 94.3377% ( 24) 00:09:11.195 10788.234 - 10838.646: 94.5097% ( 24) 00:09:11.195 10838.646 - 10889.058: 94.6674% ( 22) 00:09:11.195 10889.058 - 10939.471: 94.7893% ( 17) 00:09:11.195 10939.471 - 10989.883: 94.8753% ( 12) 00:09:11.195 10989.883 - 11040.295: 94.9398% ( 9) 00:09:11.195 11040.295 - 11090.708: 95.0043% ( 9) 00:09:11.195 11090.708 - 11141.120: 95.0831% ( 11) 00:09:11.195 11141.120 - 11191.532: 95.1620% ( 11) 00:09:11.195 11191.532 - 11241.945: 95.2337% ( 10) 00:09:11.195 11241.945 - 11292.357: 95.3053% ( 10) 00:09:11.195 11292.357 - 11342.769: 95.3913% ( 12) 00:09:11.195 11342.769 - 11393.182: 95.4558% ( 9) 00:09:11.195 11393.182 - 11443.594: 95.5562% ( 14) 00:09:11.195 11443.594 - 11494.006: 95.6350% ( 11) 00:09:11.195 11494.006 - 11544.418: 95.7067% ( 10) 00:09:11.195 11544.418 - 11594.831: 95.7784% ( 10) 00:09:11.195 11594.831 - 11645.243: 95.8501% ( 10) 00:09:11.195 11645.243 - 11695.655: 95.9361% ( 12) 00:09:11.195 11695.655 - 11746.068: 96.0006% ( 9) 00:09:11.195 11746.068 - 11796.480: 96.0794% ( 11) 00:09:11.195 11796.480 - 11846.892: 96.1583% ( 11) 00:09:11.195 11846.892 - 11897.305: 96.2586% ( 14) 00:09:11.195 11897.305 - 11947.717: 96.3876% ( 18) 00:09:11.195 11947.717 - 11998.129: 96.4808% ( 13) 00:09:11.195 11998.129 - 12048.542: 96.5381% ( 8) 00:09:11.195 12048.542 - 12098.954: 96.6098% ( 10) 00:09:11.195 12098.954 - 12149.366: 96.6815% ( 10) 00:09:11.195 12149.366 - 12199.778: 96.7603% ( 11) 00:09:11.195 12199.778 - 12250.191: 96.8248% ( 9) 00:09:11.195 12250.191 - 12300.603: 96.8893% ( 9) 
00:09:11.195 12300.603 - 12351.015: 96.9610% ( 10) 00:09:11.195 12351.015 - 12401.428: 97.0327% ( 10) 00:09:11.195 12401.428 - 12451.840: 97.0972% ( 9) 00:09:11.195 12451.840 - 12502.252: 97.1689% ( 10) 00:09:11.195 12502.252 - 12552.665: 97.2262% ( 8) 00:09:11.195 12552.665 - 12603.077: 97.2907% ( 9) 00:09:11.195 12603.077 - 12653.489: 97.3552% ( 9) 00:09:11.195 12653.489 - 12703.902: 97.4126% ( 8) 00:09:11.195 12703.902 - 12754.314: 97.4699% ( 8) 00:09:11.195 12754.314 - 12804.726: 97.5057% ( 5) 00:09:11.195 12804.726 - 12855.138: 97.5344% ( 4) 00:09:11.195 12855.138 - 12905.551: 97.5702% ( 5) 00:09:11.195 12905.551 - 13006.375: 97.6347% ( 9) 00:09:11.195 13006.375 - 13107.200: 97.6993% ( 9) 00:09:11.195 13107.200 - 13208.025: 97.7638% ( 9) 00:09:11.195 13208.025 - 13308.849: 97.8283% ( 9) 00:09:11.195 13308.849 - 13409.674: 97.8999% ( 10) 00:09:11.195 13409.674 - 13510.498: 97.9716% ( 10) 00:09:11.195 13510.498 - 13611.323: 98.0290% ( 8) 00:09:11.195 13611.323 - 13712.148: 98.0863% ( 8) 00:09:11.195 13712.148 - 13812.972: 98.1436% ( 8) 00:09:11.195 13812.972 - 13913.797: 98.1938% ( 7) 00:09:11.195 13913.797 - 14014.622: 98.2583% ( 9) 00:09:11.195 14014.622 - 14115.446: 98.3085% ( 7) 00:09:11.195 14115.446 - 14216.271: 98.3730% ( 9) 00:09:11.195 14216.271 - 14317.095: 98.4232% ( 7) 00:09:11.195 14317.095 - 14417.920: 98.4518% ( 4) 00:09:11.195 14417.920 - 14518.745: 98.4805% ( 4) 00:09:11.195 14518.745 - 14619.569: 98.5163% ( 5) 00:09:11.195 14619.569 - 14720.394: 98.5378% ( 3) 00:09:11.195 14720.394 - 14821.218: 98.5737% ( 5) 00:09:11.195 14821.218 - 14922.043: 98.9034% ( 46) 00:09:11.195 14922.043 - 15022.868: 98.9392% ( 5) 00:09:11.195 15022.868 - 15123.692: 98.9679% ( 4) 00:09:11.195 15123.692 - 15224.517: 98.9894% ( 3) 00:09:11.195 15224.517 - 15325.342: 99.0109% ( 3) 00:09:11.195 15325.342 - 15426.166: 99.0396% ( 4) 00:09:11.195 15426.166 - 15526.991: 99.0826% ( 6) 00:09:11.195 22181.415 - 22282.240: 99.0897% ( 1) 00:09:11.195 22383.065 - 22483.889: 99.0969% ( 1) 00:09:11.195 22483.889 - 22584.714: 99.1327% ( 5) 00:09:11.195 22584.714 - 22685.538: 99.1542% ( 3) 00:09:11.195 22685.538 - 22786.363: 99.2188% ( 9) 00:09:11.195 22786.363 - 22887.188: 99.3048% ( 12) 00:09:11.195 22887.188 - 22988.012: 99.4123% ( 15) 00:09:11.195 22988.012 - 23088.837: 99.5843% ( 24) 00:09:11.195 23088.837 - 23189.662: 99.6846% ( 14) 00:09:11.195 23189.662 - 23290.486: 99.7491% ( 9) 00:09:11.195 23290.486 - 23391.311: 99.7850% ( 5) 00:09:11.195 23391.311 - 23492.135: 99.8208% ( 5) 00:09:11.195 23492.135 - 23592.960: 99.8638% ( 6) 00:09:11.195 23592.960 - 23693.785: 99.8997% ( 5) 00:09:11.195 23693.785 - 23794.609: 99.9427% ( 6) 00:09:11.195 23794.609 - 23895.434: 99.9785% ( 5) 00:09:11.195 23895.434 - 23996.258: 100.0000% ( 3) 00:09:11.195 00:09:11.195 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:11.195 ============================================================================== 00:09:11.195 Range in us Cumulative IO count 00:09:11.195 4385.871 - 4411.077: 0.0072% ( 1) 00:09:11.195 4436.283 - 4461.489: 0.0215% ( 2) 00:09:11.195 4461.489 - 4486.695: 0.0502% ( 4) 00:09:11.195 4486.695 - 4511.902: 0.0645% ( 2) 00:09:11.195 4511.902 - 4537.108: 0.0932% ( 4) 00:09:11.195 4537.108 - 4562.314: 0.1218% ( 4) 00:09:11.195 4562.314 - 4587.520: 0.1577% ( 5) 00:09:11.195 4587.520 - 4612.726: 0.1792% ( 3) 00:09:11.195 4612.726 - 4637.932: 0.2150% ( 5) 00:09:11.195 4637.932 - 4663.138: 0.2365% ( 3) 00:09:11.195 4663.138 - 4688.345: 0.2580% ( 3) 00:09:11.195 4688.345 - 4713.551: 0.2867% ( 4) 
00:09:11.195 4713.551 - 4738.757: 0.3082% ( 3) 00:09:11.195 4738.757 - 4763.963: 0.3369% ( 4) 00:09:11.195 4763.963 - 4789.169: 0.3655% ( 4) 00:09:11.195 4789.169 - 4814.375: 0.4300% ( 9) 00:09:11.195 4814.375 - 4839.582: 0.4587% ( 4) 00:09:11.195 4839.582 - 4864.788: 0.4731% ( 2) 00:09:11.195 4864.788 - 4889.994: 0.4874% ( 2) 00:09:11.195 4889.994 - 4915.200: 0.5017% ( 2) 00:09:11.195 4915.200 - 4940.406: 0.5089% ( 1) 00:09:11.195 4940.406 - 4965.612: 0.5304% ( 3) 00:09:11.195 4965.612 - 4990.818: 0.5376% ( 1) 00:09:11.195 4990.818 - 5016.025: 0.5519% ( 2) 00:09:11.195 5016.025 - 5041.231: 0.5662% ( 2) 00:09:11.195 5041.231 - 5066.437: 0.5734% ( 1) 00:09:11.195 5066.437 - 5091.643: 0.5877% ( 2) 00:09:11.195 5091.643 - 5116.849: 0.6021% ( 2) 00:09:11.195 5116.849 - 5142.055: 0.6164% ( 2) 00:09:11.195 5142.055 - 5167.262: 0.6307% ( 2) 00:09:11.195 5167.262 - 5192.468: 0.6451% ( 2) 00:09:11.195 5192.468 - 5217.674: 0.6594% ( 2) 00:09:11.195 5217.674 - 5242.880: 0.6737% ( 2) 00:09:11.195 5242.880 - 5268.086: 0.6881% ( 2) 00:09:11.195 5268.086 - 5293.292: 0.7024% ( 2) 00:09:11.195 5293.292 - 5318.498: 0.7167% ( 2) 00:09:11.195 5318.498 - 5343.705: 0.7311% ( 2) 00:09:11.195 5343.705 - 5368.911: 0.7454% ( 2) 00:09:11.195 5368.911 - 5394.117: 0.7597% ( 2) 00:09:11.195 5394.117 - 5419.323: 0.7741% ( 2) 00:09:11.195 5419.323 - 5444.529: 0.7812% ( 1) 00:09:11.195 5444.529 - 5469.735: 0.7956% ( 2) 00:09:11.195 5469.735 - 5494.942: 0.8171% ( 3) 00:09:11.195 5494.942 - 5520.148: 0.8314% ( 2) 00:09:11.195 5520.148 - 5545.354: 0.8458% ( 2) 00:09:11.195 5545.354 - 5570.560: 0.8529% ( 1) 00:09:11.195 5570.560 - 5595.766: 0.8744% ( 3) 00:09:11.195 5595.766 - 5620.972: 0.8816% ( 1) 00:09:11.195 5620.972 - 5646.178: 0.8959% ( 2) 00:09:11.195 5646.178 - 5671.385: 0.9031% ( 1) 00:09:11.195 5671.385 - 5696.591: 0.9174% ( 2) 00:09:11.195 5696.591 - 5721.797: 0.9246% ( 1) 00:09:11.195 5797.415 - 5822.622: 0.9318% ( 1) 00:09:11.195 5822.622 - 5847.828: 0.9389% ( 1) 00:09:11.195 6024.271 - 6049.477: 0.9604% ( 3) 00:09:11.195 6049.477 - 6074.683: 0.9748% ( 2) 00:09:11.195 6074.683 - 6099.889: 1.0178% ( 6) 00:09:11.195 6099.889 - 6125.095: 1.0249% ( 1) 00:09:11.195 6125.095 - 6150.302: 1.0536% ( 4) 00:09:11.195 6150.302 - 6175.508: 1.0823% ( 4) 00:09:11.195 6175.508 - 6200.714: 1.1110% ( 4) 00:09:11.195 6200.714 - 6225.920: 1.1396% ( 4) 00:09:11.195 6225.920 - 6251.126: 1.1611% ( 3) 00:09:11.195 6251.126 - 6276.332: 1.1826% ( 3) 00:09:11.195 6276.332 - 6301.538: 1.2113% ( 4) 00:09:11.195 6301.538 - 6326.745: 1.2328% ( 3) 00:09:11.195 6326.745 - 6351.951: 1.2686% ( 5) 00:09:11.195 6351.951 - 6377.157: 1.3188% ( 7) 00:09:11.195 6377.157 - 6402.363: 1.3690% ( 7) 00:09:11.195 6402.363 - 6427.569: 1.4263% ( 8) 00:09:11.195 6427.569 - 6452.775: 1.4837% ( 8) 00:09:11.195 6452.775 - 6503.188: 1.7704% ( 40) 00:09:11.195 6503.188 - 6553.600: 1.8779% ( 15) 00:09:11.195 6553.600 - 6604.012: 1.9710% ( 13) 00:09:11.195 6604.012 - 6654.425: 2.0642% ( 13) 00:09:11.195 6654.425 - 6704.837: 2.1502% ( 12) 00:09:11.195 6704.837 - 6755.249: 2.2291% ( 11) 00:09:11.195 6755.249 - 6805.662: 2.3007% ( 10) 00:09:11.195 6805.662 - 6856.074: 2.4083% ( 15) 00:09:11.195 6856.074 - 6906.486: 2.4871% ( 11) 00:09:11.195 6906.486 - 6956.898: 2.6161% ( 18) 00:09:11.196 6956.898 - 7007.311: 2.7810% ( 23) 00:09:11.196 7007.311 - 7057.723: 2.9458% ( 23) 00:09:11.196 7057.723 - 7108.135: 3.0748% ( 18) 00:09:11.196 7108.135 - 7158.548: 3.1967% ( 17) 00:09:11.196 7158.548 - 7208.960: 3.3114% ( 16) 00:09:11.196 7208.960 - 7259.372: 3.4619% ( 21) 00:09:11.196 
7259.372 - 7309.785: 3.6052% ( 20) 00:09:11.196 7309.785 - 7360.197: 3.8131% ( 29) 00:09:11.196 7360.197 - 7410.609: 4.0711% ( 36) 00:09:11.196 7410.609 - 7461.022: 4.4008% ( 46) 00:09:11.196 7461.022 - 7511.434: 4.7090% ( 43) 00:09:11.196 7511.434 - 7561.846: 5.0315% ( 45) 00:09:11.196 7561.846 - 7612.258: 5.3039% ( 38) 00:09:11.196 7612.258 - 7662.671: 5.8486% ( 76) 00:09:11.196 7662.671 - 7713.083: 6.1927% ( 48) 00:09:11.196 7713.083 - 7763.495: 6.6012% ( 57) 00:09:11.196 7763.495 - 7813.908: 6.9882% ( 54) 00:09:11.196 7813.908 - 7864.320: 7.4541% ( 65) 00:09:11.196 7864.320 - 7914.732: 7.9774% ( 73) 00:09:11.196 7914.732 - 7965.145: 8.6081% ( 88) 00:09:11.196 7965.145 - 8015.557: 9.3893% ( 109) 00:09:11.196 8015.557 - 8065.969: 10.4071% ( 142) 00:09:11.196 8065.969 - 8116.382: 11.3890% ( 137) 00:09:11.196 8116.382 - 8166.794: 12.5932% ( 168) 00:09:11.196 8166.794 - 8217.206: 13.8761% ( 179) 00:09:11.196 8217.206 - 8267.618: 15.4458% ( 219) 00:09:11.196 8267.618 - 8318.031: 17.1015% ( 231) 00:09:11.196 8318.031 - 8368.443: 19.0510% ( 272) 00:09:11.196 8368.443 - 8418.855: 21.1439% ( 292) 00:09:11.196 8418.855 - 8469.268: 23.4017% ( 315) 00:09:11.196 8469.268 - 8519.680: 25.8816% ( 346) 00:09:11.196 8519.680 - 8570.092: 28.2468% ( 330) 00:09:11.196 8570.092 - 8620.505: 30.6264% ( 332) 00:09:11.196 8620.505 - 8670.917: 33.2640% ( 368) 00:09:11.196 8670.917 - 8721.329: 35.8228% ( 357) 00:09:11.196 8721.329 - 8771.742: 38.2454% ( 338) 00:09:11.196 8771.742 - 8822.154: 40.9045% ( 371) 00:09:11.196 8822.154 - 8872.566: 43.9005% ( 418) 00:09:11.196 8872.566 - 8922.978: 46.4235% ( 352) 00:09:11.196 8922.978 - 8973.391: 48.9607% ( 354) 00:09:11.196 8973.391 - 9023.803: 51.4622% ( 349) 00:09:11.196 9023.803 - 9074.215: 53.9851% ( 352) 00:09:11.196 9074.215 - 9124.628: 56.2572% ( 317) 00:09:11.196 9124.628 - 9175.040: 58.5221% ( 316) 00:09:11.196 9175.040 - 9225.452: 60.8228% ( 321) 00:09:11.196 9225.452 - 9275.865: 63.0662% ( 313) 00:09:11.196 9275.865 - 9326.277: 65.2738% ( 308) 00:09:11.196 9326.277 - 9376.689: 67.5100% ( 312) 00:09:11.196 9376.689 - 9427.102: 69.6674% ( 301) 00:09:11.196 9427.102 - 9477.514: 71.7460% ( 290) 00:09:11.196 9477.514 - 9527.926: 73.7457% ( 279) 00:09:11.196 9527.926 - 9578.338: 75.5447% ( 251) 00:09:11.196 9578.338 - 9628.751: 77.4083% ( 260) 00:09:11.196 9628.751 - 9679.163: 79.0496% ( 229) 00:09:11.196 9679.163 - 9729.575: 80.5619% ( 211) 00:09:11.196 9729.575 - 9779.988: 81.9811% ( 198) 00:09:11.196 9779.988 - 9830.400: 83.3501% ( 191) 00:09:11.196 9830.400 - 9880.812: 84.5470% ( 167) 00:09:11.196 9880.812 - 9931.225: 85.6938% ( 160) 00:09:11.196 9931.225 - 9981.637: 86.6972% ( 140) 00:09:11.196 9981.637 - 10032.049: 87.6003% ( 126) 00:09:11.196 10032.049 - 10082.462: 88.4676% ( 121) 00:09:11.196 10082.462 - 10132.874: 89.2058% ( 103) 00:09:11.196 10132.874 - 10183.286: 89.9369% ( 102) 00:09:11.196 10183.286 - 10233.698: 90.6178% ( 95) 00:09:11.196 10233.698 - 10284.111: 91.2127% ( 83) 00:09:11.196 10284.111 - 10334.523: 91.7001% ( 68) 00:09:11.196 10334.523 - 10384.935: 92.1588% ( 64) 00:09:11.196 10384.935 - 10435.348: 92.5889% ( 60) 00:09:11.196 10435.348 - 10485.760: 92.9544% ( 51) 00:09:11.196 10485.760 - 10536.172: 93.2339% ( 39) 00:09:11.196 10536.172 - 10586.585: 93.4991% ( 37) 00:09:11.196 10586.585 - 10636.997: 93.7787% ( 39) 00:09:11.196 10636.997 - 10687.409: 94.0582% ( 39) 00:09:11.196 10687.409 - 10737.822: 94.3807% ( 45) 00:09:11.196 10737.822 - 10788.234: 94.6531% ( 38) 00:09:11.196 10788.234 - 10838.646: 94.8108% ( 22) 00:09:11.196 10838.646 - 
10889.058: 95.0258% ( 30) 00:09:11.196 10889.058 - 10939.471: 95.2050% ( 25) 00:09:11.196 10939.471 - 10989.883: 95.3627% ( 22) 00:09:11.196 10989.883 - 11040.295: 95.4917% ( 18) 00:09:11.196 11040.295 - 11090.708: 95.5992% ( 15) 00:09:11.196 11090.708 - 11141.120: 95.7067% ( 15) 00:09:11.196 11141.120 - 11191.532: 95.8071% ( 14) 00:09:11.196 11191.532 - 11241.945: 95.9074% ( 14) 00:09:11.196 11241.945 - 11292.357: 95.9862% ( 11) 00:09:11.196 11292.357 - 11342.769: 96.0579% ( 10) 00:09:11.196 11342.769 - 11393.182: 96.1368% ( 11) 00:09:11.196 11393.182 - 11443.594: 96.2156% ( 11) 00:09:11.196 11443.594 - 11494.006: 96.2658% ( 7) 00:09:11.196 11494.006 - 11544.418: 96.3518% ( 12) 00:09:11.196 11544.418 - 11594.831: 96.4306% ( 11) 00:09:11.196 11594.831 - 11645.243: 96.4880% ( 8) 00:09:11.196 11645.243 - 11695.655: 96.5453% ( 8) 00:09:11.196 11695.655 - 11746.068: 96.6026% ( 8) 00:09:11.196 11746.068 - 11796.480: 96.6600% ( 8) 00:09:11.196 11796.480 - 11846.892: 96.7101% ( 7) 00:09:11.196 11846.892 - 11897.305: 96.7603% ( 7) 00:09:11.196 11897.305 - 11947.717: 96.8177% ( 8) 00:09:11.196 11947.717 - 11998.129: 96.8678% ( 7) 00:09:11.196 11998.129 - 12048.542: 96.9323% ( 9) 00:09:11.196 12048.542 - 12098.954: 97.0040% ( 10) 00:09:11.196 12098.954 - 12149.366: 97.0614% ( 8) 00:09:11.196 12149.366 - 12199.778: 97.1187% ( 8) 00:09:11.196 12199.778 - 12250.191: 97.1545% ( 5) 00:09:11.196 12250.191 - 12300.603: 97.1975% ( 6) 00:09:11.196 12300.603 - 12351.015: 97.2979% ( 14) 00:09:11.196 12351.015 - 12401.428: 97.3767% ( 11) 00:09:11.196 12401.428 - 12451.840: 97.4269% ( 7) 00:09:11.196 12451.840 - 12502.252: 97.4699% ( 6) 00:09:11.196 12502.252 - 12552.665: 97.4986% ( 4) 00:09:11.196 12552.665 - 12603.077: 97.5272% ( 4) 00:09:11.196 12603.077 - 12653.489: 97.5487% ( 3) 00:09:11.196 12653.489 - 12703.902: 97.5774% ( 4) 00:09:11.196 12703.902 - 12754.314: 97.5989% ( 3) 00:09:11.196 12754.314 - 12804.726: 97.6276% ( 4) 00:09:11.196 12804.726 - 12855.138: 97.6634% ( 5) 00:09:11.196 12855.138 - 12905.551: 97.6849% ( 3) 00:09:11.196 12905.551 - 13006.375: 97.7423% ( 8) 00:09:11.196 13006.375 - 13107.200: 97.8068% ( 9) 00:09:11.196 13107.200 - 13208.025: 97.8569% ( 7) 00:09:11.196 13208.025 - 13308.849: 97.9143% ( 8) 00:09:11.196 13308.849 - 13409.674: 97.9644% ( 7) 00:09:11.196 13409.674 - 13510.498: 98.0218% ( 8) 00:09:11.196 13510.498 - 13611.323: 98.0720% ( 7) 00:09:11.196 13611.323 - 13712.148: 98.1221% ( 7) 00:09:11.196 13712.148 - 13812.972: 98.1866% ( 9) 00:09:11.196 13812.972 - 13913.797: 98.2655% ( 11) 00:09:11.196 13913.797 - 14014.622: 98.3228% ( 8) 00:09:11.196 14014.622 - 14115.446: 98.3945% ( 10) 00:09:11.196 14115.446 - 14216.271: 98.4447% ( 7) 00:09:11.196 14216.271 - 14317.095: 98.5020% ( 8) 00:09:11.196 14317.095 - 14417.920: 98.5737% ( 10) 00:09:11.196 14417.920 - 14518.745: 98.6454% ( 10) 00:09:11.196 14518.745 - 14619.569: 98.7170% ( 10) 00:09:11.196 14619.569 - 14720.394: 98.7959% ( 11) 00:09:11.196 14720.394 - 14821.218: 98.8675% ( 10) 00:09:11.196 14821.218 - 14922.043: 98.9321% ( 9) 00:09:11.196 14922.043 - 15022.868: 98.9894% ( 8) 00:09:11.196 15022.868 - 15123.692: 99.0181% ( 4) 00:09:11.196 15123.692 - 15224.517: 99.0467% ( 4) 00:09:11.196 15224.517 - 15325.342: 99.0682% ( 3) 00:09:11.196 15325.342 - 15426.166: 99.0826% ( 2) 00:09:11.196 22080.591 - 22181.415: 99.1112% ( 4) 00:09:11.196 22181.415 - 22282.240: 99.1972% ( 12) 00:09:11.196 22282.240 - 22383.065: 99.2474% ( 7) 00:09:11.196 22383.065 - 22483.889: 99.3048% ( 8) 00:09:11.196 22483.889 - 22584.714: 99.3764% ( 10) 
00:09:11.196 22584.714 - 22685.538: 99.5771% ( 28) 00:09:11.196 22685.538 - 22786.363: 99.6130% ( 5) 00:09:11.196 22786.363 - 22887.188: 99.6488% ( 5) 00:09:11.196 22887.188 - 22988.012: 99.6846% ( 5) 00:09:11.196 22988.012 - 23088.837: 99.7205% ( 5) 00:09:11.196 23088.837 - 23189.662: 99.7563% ( 5) 00:09:11.196 23189.662 - 23290.486: 99.7850% ( 4) 00:09:11.196 23290.486 - 23391.311: 99.8065% ( 3) 00:09:11.196 23391.311 - 23492.135: 99.8423% ( 5) 00:09:11.196 23492.135 - 23592.960: 99.8782% ( 5) 00:09:11.196 23592.960 - 23693.785: 99.9283% ( 7) 00:09:11.196 23693.785 - 23794.609: 99.9928% ( 9) 00:09:11.196 23794.609 - 23895.434: 100.0000% ( 1) 00:09:11.196 00:09:11.196 06:34:21 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:11.196 00:09:11.196 real 0m2.452s 00:09:11.196 user 0m2.173s 00:09:11.196 sys 0m0.173s 00:09:11.196 06:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.196 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:09:11.196 ************************************ 00:09:11.196 END TEST nvme_perf 00:09:11.196 ************************************ 00:09:11.455 06:34:21 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:11.456 06:34:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:11.456 06:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.456 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:09:11.456 ************************************ 00:09:11.456 START TEST nvme_hello_world 00:09:11.456 ************************************ 00:09:11.456 06:34:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:11.456 Initializing NVMe Controllers 00:09:11.456 Attached to 0000:00:07.0 00:09:11.456 Namespace ID: 1 size: 5GB 00:09:11.456 Attached to 0000:00:09.0 00:09:11.456 Namespace ID: 1 size: 1GB 00:09:11.456 Attached to 0000:00:06.0 00:09:11.456 Namespace ID: 1 size: 6GB 00:09:11.456 Attached to 0000:00:08.0 00:09:11.456 Namespace ID: 1 size: 4GB 00:09:11.456 Namespace ID: 2 size: 4GB 00:09:11.456 Namespace ID: 3 size: 4GB 00:09:11.456 Initialization complete. 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 00:09:11.456 INFO: using host memory buffer for IO 00:09:11.456 Hello world! 
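For readers who want to reproduce the perf histograms and hello_world run recorded above outside of CI, the binaries invoked via run_test can be launched directly from an SPDK checkout. A minimal sketch, assuming the same layout as this job (repo at /home/vagrant/spdk_repo/spdk, examples already built, hugepages and device bindings prepared by scripts/setup.sh); the -i 0 shared-memory ID matches the flag recorded in the log:

    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=4096 ./scripts/setup.sh     # bind NVMe controllers to a userspace driver, reserve hugepages
    sudo ./build/examples/hello_world -i 0   # writes "Hello world!" to each attached namespace, reads it back
    sudo ./test/nvme/sgl/sgl                 # the SGL build_io_request_* checks that follow below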
00:09:11.456 00:09:11.456 real 0m0.182s 00:09:11.456 user 0m0.066s 00:09:11.456 sys 0m0.070s 00:09:11.456 06:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.456 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.456 ************************************ 00:09:11.456 END TEST nvme_hello_world 00:09:11.456 ************************************ 00:09:11.456 06:34:22 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:11.456 06:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.456 06:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.456 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.456 ************************************ 00:09:11.456 START TEST nvme_sgl 00:09:11.456 ************************************ 00:09:11.456 06:34:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:11.716 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:11.716 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:11.716 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:11.716 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:11.716 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:11.716 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:11.716 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:11.716 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:11.716 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:11.716 
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:11.716 NVMe Readv/Writev Request test 00:09:11.716 Attached to 0000:00:07.0 00:09:11.716 Attached to 0000:00:09.0 00:09:11.716 Attached to 0000:00:06.0 00:09:11.716 Attached to 0000:00:08.0 00:09:11.716 0000:00:07.0: build_io_request_2 test passed 00:09:11.716 0000:00:07.0: build_io_request_4 test passed 00:09:11.716 0000:00:07.0: build_io_request_5 test passed 00:09:11.716 0000:00:07.0: build_io_request_6 test passed 00:09:11.716 0000:00:07.0: build_io_request_7 test passed 00:09:11.716 0000:00:07.0: build_io_request_10 test passed 00:09:11.716 0000:00:06.0: build_io_request_2 test passed 00:09:11.716 0000:00:06.0: build_io_request_4 test passed 00:09:11.716 0000:00:06.0: build_io_request_5 test passed 00:09:11.716 0000:00:06.0: build_io_request_6 test passed 00:09:11.716 0000:00:06.0: build_io_request_7 test passed 00:09:11.716 0000:00:06.0: build_io_request_10 test passed 00:09:11.716 Cleaning up... 00:09:11.716 00:09:11.716 real 0m0.248s 00:09:11.716 user 0m0.122s 00:09:11.716 sys 0m0.081s 00:09:11.716 06:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.716 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.716 ************************************ 00:09:11.716 END TEST nvme_sgl 00:09:11.716 ************************************ 00:09:11.716 06:34:22 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:11.716 06:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.716 06:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.716 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.716 ************************************ 00:09:11.716 START TEST nvme_e2edp 00:09:11.716 ************************************ 00:09:11.716 06:34:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:11.977 NVMe Write/Read with End-to-End data protection test 00:09:11.977 Attached to 0000:00:07.0 00:09:11.977 Attached to 0000:00:09.0 00:09:11.977 Attached to 0000:00:06.0 00:09:11.977 Attached to 0000:00:08.0 00:09:11.977 Cleaning up... 
00:09:11.977 00:09:11.977 real 0m0.165s 00:09:11.977 user 0m0.054s 00:09:11.977 sys 0m0.073s 00:09:11.977 06:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.977 ************************************ 00:09:11.977 END TEST nvme_e2edp 00:09:11.977 ************************************ 00:09:11.977 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 06:34:22 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:11.977 06:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.977 06:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.977 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:11.977 ************************************ 00:09:11.977 START TEST nvme_reserve 00:09:11.977 ************************************ 00:09:11.977 06:34:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:12.239 ===================================================== 00:09:12.239 NVMe Controller at PCI bus 0, device 7, function 0 00:09:12.239 ===================================================== 00:09:12.239 Reservations: Not Supported 00:09:12.239 ===================================================== 00:09:12.239 NVMe Controller at PCI bus 0, device 9, function 0 00:09:12.239 ===================================================== 00:09:12.239 Reservations: Not Supported 00:09:12.239 ===================================================== 00:09:12.239 NVMe Controller at PCI bus 0, device 6, function 0 00:09:12.239 ===================================================== 00:09:12.239 Reservations: Not Supported 00:09:12.239 ===================================================== 00:09:12.239 NVMe Controller at PCI bus 0, device 8, function 0 00:09:12.239 ===================================================== 00:09:12.239 Reservations: Not Supported 00:09:12.239 Reservation test passed 00:09:12.239 00:09:12.239 real 0m0.165s 00:09:12.239 user 0m0.042s 00:09:12.239 sys 0m0.078s 00:09:12.239 06:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.239 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:12.239 ************************************ 00:09:12.239 END TEST nvme_reserve 00:09:12.239 ************************************ 00:09:12.239 06:34:22 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:12.239 06:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.239 06:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.239 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:09:12.239 ************************************ 00:09:12.239 START TEST nvme_err_injection 00:09:12.239 ************************************ 00:09:12.239 06:34:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:12.501 NVMe Error Injection test 00:09:12.501 Attached to 0000:00:07.0 00:09:12.501 Attached to 0000:00:09.0 00:09:12.501 Attached to 0000:00:06.0 00:09:12.501 Attached to 0000:00:08.0 00:09:12.501 0000:00:07.0: get features failed as expected 00:09:12.501 0000:00:09.0: get features failed as expected 00:09:12.501 0000:00:06.0: get features failed as expected 00:09:12.501 0000:00:08.0: get features failed as expected 00:09:12.501 0000:00:09.0: get features successfully as expected 00:09:12.501 0000:00:06.0: get features successfully as expected 00:09:12.501 0000:00:08.0: get features 
successfully as expected 00:09:12.501 0000:00:07.0: get features successfully as expected 00:09:12.501 0000:00:08.0: read failed as expected 00:09:12.501 0000:00:07.0: read failed as expected 00:09:12.501 0000:00:09.0: read failed as expected 00:09:12.501 0000:00:06.0: read failed as expected 00:09:12.501 0000:00:08.0: read successfully as expected 00:09:12.501 0000:00:07.0: read successfully as expected 00:09:12.501 0000:00:09.0: read successfully as expected 00:09:12.501 0000:00:06.0: read successfully as expected 00:09:12.501 Cleaning up... 00:09:12.501 00:09:12.501 real 0m0.201s 00:09:12.501 user 0m0.067s 00:09:12.501 sys 0m0.081s 00:09:12.501 06:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.501 ************************************ 00:09:12.501 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:09:12.501 END TEST nvme_err_injection 00:09:12.502 ************************************ 00:09:12.502 06:34:23 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:12.502 06:34:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:12.502 06:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.502 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:09:12.502 ************************************ 00:09:12.502 START TEST nvme_overhead 00:09:12.502 ************************************ 00:09:12.502 06:34:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:13.889 Initializing NVMe Controllers 00:09:13.889 Attached to 0000:00:07.0 00:09:13.889 Attached to 0000:00:09.0 00:09:13.889 Attached to 0000:00:06.0 00:09:13.889 Attached to 0000:00:08.0 00:09:13.889 Initialization complete. Launching workers. 
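The nvme_overhead run launched above measures per-I/O software overhead: average submit and completion times in nanoseconds plus the percentile histograms that follow. The flag glosses here are inferred from the invocation and its output, not from the tool's documentation, and should be checked against the binary's usage text:

# overhead invocation from the trace above, annotated (glosses are inferred):
#   -o 4096  I/O size in bytes
#   -t 1     run time in seconds
#   -H       evidently turns on the submit/complete histograms printed below
#   -i 0     DPDK shared-memory id, matching the other tools in this session
/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0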
00:09:13.889 submit (in ns) avg, min, max = 15841.1, 11690.8, 322343.8 00:09:13.889 complete (in ns) avg, min, max = 9520.2, 7350.8, 288805.4 00:09:13.889 00:09:13.889 Submit histogram 00:09:13.889 ================ 00:09:13.889 Range in us Cumulative Count 00:09:13.889 11.668 - 11.717: 0.0273% ( 1) 00:09:13.889 11.963 - 12.012: 0.1091% ( 3) 00:09:13.889 12.012 - 12.062: 0.1364% ( 1) 00:09:13.889 12.062 - 12.111: 0.2183% ( 3) 00:09:13.889 12.111 - 12.160: 0.3001% ( 3) 00:09:13.889 12.160 - 12.209: 0.5184% ( 8) 00:09:13.889 12.209 - 12.258: 0.7640% ( 9) 00:09:13.889 12.258 - 12.308: 1.1460% ( 14) 00:09:13.889 12.308 - 12.357: 1.6098% ( 17) 00:09:13.889 12.357 - 12.406: 2.2920% ( 25) 00:09:13.889 12.406 - 12.455: 3.3288% ( 38) 00:09:13.889 12.455 - 12.505: 3.9563% ( 23) 00:09:13.889 12.505 - 12.554: 4.9113% ( 35) 00:09:13.889 12.554 - 12.603: 5.8663% ( 35) 00:09:13.889 12.603 - 12.702: 8.0491% ( 80) 00:09:13.889 12.702 - 12.800: 10.4775% ( 89) 00:09:13.889 12.800 - 12.898: 12.5512% ( 76) 00:09:13.889 12.898 - 12.997: 14.3793% ( 67) 00:09:13.889 12.997 - 13.095: 15.9891% ( 59) 00:09:13.889 13.095 - 13.194: 17.3806% ( 51) 00:09:13.889 13.194 - 13.292: 18.3629% ( 36) 00:09:13.889 13.292 - 13.391: 19.3179% ( 35) 00:09:13.889 13.391 - 13.489: 20.2729% ( 35) 00:09:13.889 13.489 - 13.588: 21.2005% ( 34) 00:09:13.889 13.588 - 13.686: 22.2920% ( 40) 00:09:13.889 13.686 - 13.785: 23.0286% ( 27) 00:09:13.889 13.785 - 13.883: 23.9018% ( 32) 00:09:13.889 13.883 - 13.982: 24.8840% ( 36) 00:09:13.889 13.982 - 14.080: 25.8936% ( 37) 00:09:13.889 14.080 - 14.178: 27.1214% ( 45) 00:09:13.889 14.178 - 14.277: 28.0764% ( 35) 00:09:13.889 14.277 - 14.375: 29.0041% ( 34) 00:09:13.889 14.375 - 14.474: 29.8226% ( 30) 00:09:13.889 14.474 - 14.572: 30.5048% ( 25) 00:09:13.889 14.572 - 14.671: 31.2415% ( 27) 00:09:13.889 14.671 - 14.769: 31.8963% ( 24) 00:09:13.889 14.769 - 14.868: 32.4966% ( 22) 00:09:13.889 14.868 - 14.966: 33.3424% ( 31) 00:09:13.889 14.966 - 15.065: 34.2701% ( 34) 00:09:13.889 15.065 - 15.163: 35.1705% ( 33) 00:09:13.889 15.163 - 15.262: 36.0709% ( 33) 00:09:13.889 15.262 - 15.360: 37.0259% ( 35) 00:09:13.889 15.360 - 15.458: 38.1992% ( 43) 00:09:13.889 15.458 - 15.557: 39.3452% ( 42) 00:09:13.889 15.557 - 15.655: 41.6917% ( 86) 00:09:13.889 15.655 - 15.754: 45.3479% ( 134) 00:09:13.889 15.754 - 15.852: 49.1405% ( 139) 00:09:13.889 15.852 - 15.951: 53.2060% ( 149) 00:09:13.889 15.951 - 16.049: 59.0996% ( 216) 00:09:13.889 16.049 - 16.148: 65.2933% ( 227) 00:09:13.889 16.148 - 16.246: 70.3411% ( 185) 00:09:13.889 16.246 - 16.345: 74.8977% ( 167) 00:09:13.889 16.345 - 16.443: 79.2360% ( 159) 00:09:13.889 16.443 - 16.542: 82.4557% ( 118) 00:09:13.889 16.542 - 16.640: 84.9659% ( 92) 00:09:13.889 16.640 - 16.738: 86.5211% ( 57) 00:09:13.889 16.738 - 16.837: 87.5580% ( 38) 00:09:13.889 16.837 - 16.935: 88.2947% ( 27) 00:09:13.889 16.935 - 17.034: 88.9495% ( 24) 00:09:13.889 17.034 - 17.132: 89.2497% ( 11) 00:09:13.889 17.132 - 17.231: 89.6862% ( 16) 00:09:13.889 17.231 - 17.329: 89.9864% ( 11) 00:09:13.889 17.329 - 17.428: 90.2592% ( 10) 00:09:13.889 17.428 - 17.526: 90.3683% ( 4) 00:09:13.889 17.526 - 17.625: 90.5866% ( 8) 00:09:13.889 17.625 - 17.723: 90.8049% ( 8) 00:09:13.889 17.723 - 17.822: 90.9959% ( 7) 00:09:13.889 17.822 - 17.920: 91.3506% ( 13) 00:09:13.889 17.920 - 18.018: 91.5416% ( 7) 00:09:13.889 18.018 - 18.117: 91.8963% ( 13) 00:09:13.889 18.117 - 18.215: 92.1965% ( 11) 00:09:13.889 18.215 - 18.314: 92.5512% ( 13) 00:09:13.889 18.314 - 18.412: 93.0423% ( 18) 00:09:13.889 18.412 - 18.511: 
93.3970% ( 13) 00:09:13.889 18.511 - 18.609: 93.5880% ( 7) 00:09:13.889 18.609 - 18.708: 93.6971% ( 4) 00:09:13.889 18.708 - 18.806: 93.9154% ( 8) 00:09:13.889 18.806 - 18.905: 94.0246% ( 4) 00:09:13.889 18.905 - 19.003: 94.4065% ( 14) 00:09:13.889 19.003 - 19.102: 94.5430% ( 5) 00:09:13.889 19.102 - 19.200: 94.6521% ( 4) 00:09:13.889 19.200 - 19.298: 94.8431% ( 7) 00:09:13.889 19.298 - 19.397: 94.9250% ( 3) 00:09:13.889 19.397 - 19.495: 95.0068% ( 3) 00:09:13.890 19.495 - 19.594: 95.1160% ( 4) 00:09:13.890 19.594 - 19.692: 95.1978% ( 3) 00:09:13.890 19.692 - 19.791: 95.2524% ( 2) 00:09:13.890 19.791 - 19.889: 95.2797% ( 1) 00:09:13.890 19.889 - 19.988: 95.3615% ( 3) 00:09:13.890 19.988 - 20.086: 95.4434% ( 3) 00:09:13.890 20.086 - 20.185: 95.6071% ( 6) 00:09:13.890 20.185 - 20.283: 95.6617% ( 2) 00:09:13.890 20.283 - 20.382: 95.7981% ( 5) 00:09:13.890 20.382 - 20.480: 95.8254% ( 1) 00:09:13.890 20.480 - 20.578: 95.9891% ( 6) 00:09:13.890 20.578 - 20.677: 96.1255% ( 5) 00:09:13.890 20.677 - 20.775: 96.2892% ( 6) 00:09:13.890 20.775 - 20.874: 96.5075% ( 8) 00:09:13.890 20.874 - 20.972: 96.6166% ( 4) 00:09:13.890 20.972 - 21.071: 96.8076% ( 7) 00:09:13.890 21.071 - 21.169: 96.9714% ( 6) 00:09:13.890 21.169 - 21.268: 97.1623% ( 7) 00:09:13.890 21.268 - 21.366: 97.2988% ( 5) 00:09:13.890 21.366 - 21.465: 97.5171% ( 8) 00:09:13.890 21.465 - 21.563: 97.5443% ( 1) 00:09:13.890 21.563 - 21.662: 97.6808% ( 5) 00:09:13.890 21.662 - 21.760: 97.7353% ( 2) 00:09:13.890 21.760 - 21.858: 97.7626% ( 1) 00:09:13.890 21.858 - 21.957: 97.8172% ( 2) 00:09:13.890 21.957 - 22.055: 97.8718% ( 2) 00:09:13.890 22.055 - 22.154: 97.9536% ( 3) 00:09:13.890 22.154 - 22.252: 97.9809% ( 1) 00:09:13.890 22.252 - 22.351: 98.0082% ( 1) 00:09:13.890 22.351 - 22.449: 98.0355% ( 1) 00:09:13.890 22.449 - 22.548: 98.0900% ( 2) 00:09:13.890 22.548 - 22.646: 98.1719% ( 3) 00:09:13.890 22.745 - 22.843: 98.1992% ( 1) 00:09:13.890 22.942 - 23.040: 98.2810% ( 3) 00:09:13.890 23.040 - 23.138: 98.4175% ( 5) 00:09:13.890 23.138 - 23.237: 98.4993% ( 3) 00:09:13.890 23.237 - 23.335: 98.5266% ( 1) 00:09:13.890 23.335 - 23.434: 98.6085% ( 3) 00:09:13.890 23.434 - 23.532: 98.6903% ( 3) 00:09:13.890 23.532 - 23.631: 98.7449% ( 2) 00:09:13.890 23.631 - 23.729: 98.7995% ( 2) 00:09:13.890 23.729 - 23.828: 98.8813% ( 3) 00:09:13.890 23.828 - 23.926: 98.9905% ( 4) 00:09:13.890 23.926 - 24.025: 99.0723% ( 3) 00:09:13.890 24.025 - 24.123: 99.1269% ( 2) 00:09:13.890 24.222 - 24.320: 99.1542% ( 1) 00:09:13.890 24.320 - 24.418: 99.1814% ( 1) 00:09:13.890 24.418 - 24.517: 99.2360% ( 2) 00:09:13.890 24.517 - 24.615: 99.2633% ( 1) 00:09:13.890 24.615 - 24.714: 99.2906% ( 1) 00:09:13.890 24.714 - 24.812: 99.3452% ( 2) 00:09:13.890 25.108 - 25.206: 99.4270% ( 3) 00:09:13.890 25.206 - 25.403: 99.4816% ( 2) 00:09:13.890 25.403 - 25.600: 99.5089% ( 1) 00:09:13.890 25.797 - 25.994: 99.5634% ( 2) 00:09:13.890 25.994 - 26.191: 99.5907% ( 1) 00:09:13.890 27.175 - 27.372: 99.6180% ( 1) 00:09:13.890 28.948 - 29.145: 99.6453% ( 1) 00:09:13.890 29.145 - 29.342: 99.6726% ( 1) 00:09:13.890 30.720 - 30.917: 99.6999% ( 1) 00:09:13.890 30.917 - 31.114: 99.7271% ( 1) 00:09:13.890 31.902 - 32.098: 99.7544% ( 1) 00:09:13.890 33.477 - 33.674: 99.7817% ( 1) 00:09:13.890 33.674 - 33.871: 99.8090% ( 1) 00:09:13.890 37.022 - 37.218: 99.8363% ( 1) 00:09:13.890 49.034 - 49.231: 99.8636% ( 1) 00:09:13.890 64.591 - 64.985: 99.8909% ( 1) 00:09:13.890 163.052 - 163.840: 99.9181% ( 1) 00:09:13.890 169.354 - 170.142: 99.9454% ( 1) 00:09:13.890 264.665 - 266.240: 99.9727% ( 1) 00:09:13.890 
321.378 - 322.954: 100.0000% ( 1) 00:09:13.890 00:09:13.890 Complete histogram 00:09:13.890 ================== 00:09:13.890 Range in us Cumulative Count 00:09:13.890 7.335 - 7.385: 0.1091% ( 4) 00:09:13.890 7.385 - 7.434: 0.8731% ( 28) 00:09:13.890 7.434 - 7.483: 3.2742% ( 88) 00:09:13.890 7.483 - 7.532: 5.1569% ( 69) 00:09:13.890 7.532 - 7.582: 7.2033% ( 75) 00:09:13.890 7.582 - 7.631: 9.7135% ( 92) 00:09:13.890 7.631 - 7.680: 11.6235% ( 70) 00:09:13.890 7.680 - 7.729: 13.2060% ( 58) 00:09:13.890 7.729 - 7.778: 15.4707% ( 83) 00:09:13.890 7.778 - 7.828: 17.5989% ( 78) 00:09:13.890 7.828 - 7.877: 19.1269% ( 56) 00:09:13.890 7.877 - 7.926: 20.3274% ( 44) 00:09:13.890 7.926 - 7.975: 21.3370% ( 37) 00:09:13.890 7.975 - 8.025: 21.7735% ( 16) 00:09:13.890 8.025 - 8.074: 22.2374% ( 17) 00:09:13.890 8.074 - 8.123: 22.5102% ( 10) 00:09:13.890 8.123 - 8.172: 22.6467% ( 5) 00:09:13.890 8.172 - 8.222: 22.8104% ( 6) 00:09:13.890 8.222 - 8.271: 22.9195% ( 4) 00:09:13.890 8.320 - 8.369: 22.9468% ( 1) 00:09:13.890 8.369 - 8.418: 22.9741% ( 1) 00:09:13.890 8.418 - 8.468: 23.0286% ( 2) 00:09:13.890 8.468 - 8.517: 23.0832% ( 2) 00:09:13.890 8.517 - 8.566: 23.3015% ( 8) 00:09:13.890 8.566 - 8.615: 23.5744% ( 10) 00:09:13.890 8.615 - 8.665: 23.9836% ( 15) 00:09:13.890 8.665 - 8.714: 24.5020% ( 19) 00:09:13.890 8.714 - 8.763: 25.5662% ( 39) 00:09:13.890 8.763 - 8.812: 26.3574% ( 29) 00:09:13.890 8.812 - 8.862: 27.7217% ( 50) 00:09:13.890 8.862 - 8.911: 28.7858% ( 39) 00:09:13.890 8.911 - 8.960: 29.7681% ( 36) 00:09:13.890 8.960 - 9.009: 31.2688% ( 55) 00:09:13.890 9.009 - 9.058: 32.3602% ( 40) 00:09:13.890 9.058 - 9.108: 33.5880% ( 45) 00:09:13.890 9.108 - 9.157: 34.9523% ( 50) 00:09:13.890 9.157 - 9.206: 36.1528% ( 44) 00:09:13.890 9.206 - 9.255: 37.4352% ( 47) 00:09:13.890 9.255 - 9.305: 38.4447% ( 37) 00:09:13.890 9.305 - 9.354: 39.6453% ( 44) 00:09:13.890 9.354 - 9.403: 40.6821% ( 38) 00:09:13.890 9.403 - 9.452: 41.6917% ( 37) 00:09:13.890 9.452 - 9.502: 42.9195% ( 45) 00:09:13.890 9.502 - 9.551: 44.0655% ( 42) 00:09:13.890 9.551 - 9.600: 45.9754% ( 70) 00:09:13.890 9.600 - 9.649: 48.1855% ( 81) 00:09:13.890 9.649 - 9.698: 51.1869% ( 110) 00:09:13.890 9.698 - 9.748: 53.6971% ( 92) 00:09:13.890 9.748 - 9.797: 56.2347% ( 93) 00:09:13.890 9.797 - 9.846: 59.3997% ( 116) 00:09:13.890 9.846 - 9.895: 63.3288% ( 144) 00:09:13.890 9.895 - 9.945: 67.5853% ( 156) 00:09:13.890 9.945 - 9.994: 72.2783% ( 172) 00:09:13.890 9.994 - 10.043: 76.9168% ( 170) 00:09:13.890 10.043 - 10.092: 80.9277% ( 147) 00:09:13.890 10.092 - 10.142: 84.7749% ( 141) 00:09:13.890 10.142 - 10.191: 88.3492% ( 131) 00:09:13.890 10.191 - 10.240: 91.0232% ( 98) 00:09:13.890 10.240 - 10.289: 93.2060% ( 80) 00:09:13.890 10.289 - 10.338: 94.6248% ( 52) 00:09:13.890 10.338 - 10.388: 95.6344% ( 37) 00:09:13.890 10.388 - 10.437: 96.3438% ( 26) 00:09:13.890 10.437 - 10.486: 97.0532% ( 26) 00:09:13.890 10.486 - 10.535: 97.2988% ( 9) 00:09:13.890 10.535 - 10.585: 97.4079% ( 4) 00:09:13.890 10.585 - 10.634: 97.5716% ( 6) 00:09:13.890 10.634 - 10.683: 97.5989% ( 1) 00:09:13.890 10.683 - 10.732: 97.6808% ( 3) 00:09:13.890 10.732 - 10.782: 97.8172% ( 5) 00:09:13.890 10.782 - 10.831: 97.8718% ( 2) 00:09:13.890 10.831 - 10.880: 97.8990% ( 1) 00:09:13.890 10.880 - 10.929: 97.9263% ( 1) 00:09:13.890 11.471 - 11.520: 97.9536% ( 1) 00:09:13.890 11.569 - 11.618: 97.9809% ( 1) 00:09:13.890 12.111 - 12.160: 98.0082% ( 1) 00:09:13.890 12.800 - 12.898: 98.0355% ( 1) 00:09:13.890 13.194 - 13.292: 98.0628% ( 1) 00:09:13.890 13.292 - 13.391: 98.1173% ( 2) 00:09:13.890 13.391 
- 13.489: 98.1446% ( 1) 00:09:13.890 13.489 - 13.588: 98.2265% ( 3) 00:09:13.890 13.588 - 13.686: 98.3083% ( 3) 00:09:13.890 13.686 - 13.785: 98.3629% ( 2) 00:09:13.890 13.785 - 13.883: 98.3902% ( 1) 00:09:13.890 13.883 - 13.982: 98.4447% ( 2) 00:09:13.890 13.982 - 14.080: 98.4993% ( 2) 00:09:13.890 14.080 - 14.178: 98.5266% ( 1) 00:09:13.890 14.178 - 14.277: 98.5812% ( 2) 00:09:13.890 14.572 - 14.671: 98.6357% ( 2) 00:09:13.890 14.966 - 15.065: 98.6630% ( 1) 00:09:13.890 15.360 - 15.458: 98.7176% ( 2) 00:09:13.890 15.458 - 15.557: 98.7449% ( 1) 00:09:13.890 15.754 - 15.852: 98.7722% ( 1) 00:09:13.890 15.951 - 16.049: 98.8267% ( 2) 00:09:13.890 16.148 - 16.246: 98.8540% ( 1) 00:09:13.890 16.345 - 16.443: 98.8813% ( 1) 00:09:13.890 16.443 - 16.542: 98.9086% ( 1) 00:09:13.890 16.837 - 16.935: 98.9632% ( 2) 00:09:13.890 17.034 - 17.132: 99.0177% ( 2) 00:09:13.890 17.231 - 17.329: 99.0450% ( 1) 00:09:13.890 17.329 - 17.428: 99.0723% ( 1) 00:09:13.890 17.428 - 17.526: 99.1542% ( 3) 00:09:13.890 17.526 - 17.625: 99.1814% ( 1) 00:09:13.890 17.723 - 17.822: 99.2087% ( 1) 00:09:13.890 18.314 - 18.412: 99.2906% ( 3) 00:09:13.890 18.412 - 18.511: 99.3724% ( 3) 00:09:13.890 18.511 - 18.609: 99.3997% ( 1) 00:09:13.890 19.200 - 19.298: 99.4270% ( 1) 00:09:13.890 19.298 - 19.397: 99.4543% ( 1) 00:09:13.890 19.397 - 19.495: 99.5089% ( 2) 00:09:13.890 19.594 - 19.692: 99.5362% ( 1) 00:09:13.890 19.692 - 19.791: 99.5907% ( 2) 00:09:13.890 19.889 - 19.988: 99.6180% ( 1) 00:09:13.890 20.185 - 20.283: 99.6726% ( 2) 00:09:13.890 20.578 - 20.677: 99.6999% ( 1) 00:09:13.891 22.351 - 22.449: 99.7271% ( 1) 00:09:13.891 24.222 - 24.320: 99.7544% ( 1) 00:09:13.891 25.403 - 25.600: 99.7817% ( 1) 00:09:13.891 27.372 - 27.569: 99.8090% ( 1) 00:09:13.891 27.766 - 27.963: 99.8363% ( 1) 00:09:13.891 32.295 - 32.492: 99.8636% ( 1) 00:09:13.891 32.492 - 32.689: 99.8909% ( 1) 00:09:13.891 38.794 - 38.991: 99.9181% ( 1) 00:09:13.891 47.458 - 47.655: 99.9454% ( 1) 00:09:13.891 58.289 - 58.683: 99.9727% ( 1) 00:09:13.891 288.295 - 289.871: 100.0000% ( 1) 00:09:13.891 00:09:13.891 00:09:13.891 real 0m1.177s 00:09:13.891 user 0m1.059s 00:09:13.891 sys 0m0.075s 00:09:13.891 06:34:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.891 ************************************ 00:09:13.891 END TEST nvme_overhead 00:09:13.891 ************************************ 00:09:13.891 06:34:24 -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 06:34:24 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:13.891 06:34:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:13.891 06:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.891 06:34:24 -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 ************************************ 00:09:13.891 START TEST nvme_arbitration 00:09:13.891 ************************************ 00:09:13.891 06:34:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:17.188 Initializing NVMe Controllers 00:09:17.188 Attached to 0000:00:07.0 00:09:17.188 Attached to 0000:00:09.0 00:09:17.188 Attached to 0000:00:06.0 00:09:17.188 Attached to 0000:00:08.0 00:09:17.188 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:09:17.188 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:09:17.188 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:09:17.188 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:17.188 Associating QEMU NVMe Ctrl (12342 ) 
with lcore 0 00:09:17.188 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:17.188 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:17.188 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:17.188 Initialization complete. Launching workers. 00:09:17.188 Starting thread on core 1 with urgent priority queue 00:09:17.188 Starting thread on core 2 with urgent priority queue 00:09:17.188 Starting thread on core 3 with urgent priority queue 00:09:17.189 Starting thread on core 0 with urgent priority queue 00:09:17.189 QEMU NVMe Ctrl (12341 ) core 0: 5734.33 IO/s 17.44 secs/100000 ios 00:09:17.189 QEMU NVMe Ctrl (12342 ) core 0: 5738.67 IO/s 17.43 secs/100000 ios 00:09:17.189 QEMU NVMe Ctrl (12343 ) core 1: 5525.33 IO/s 18.10 secs/100000 ios 00:09:17.189 QEMU NVMe Ctrl (12342 ) core 1: 5525.33 IO/s 18.10 secs/100000 ios 00:09:17.189 QEMU NVMe Ctrl (12340 ) core 2: 5312.00 IO/s 18.83 secs/100000 ios 00:09:17.189 QEMU NVMe Ctrl (12342 ) core 3: 5141.33 IO/s 19.45 secs/100000 ios 00:09:17.189 ======================================================== 00:09:17.189 00:09:17.189 00:09:17.189 real 0m3.247s 00:09:17.189 user 0m9.062s 00:09:17.189 sys 0m0.115s 00:09:17.189 06:34:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.189 ************************************ 00:09:17.189 END TEST nvme_arbitration 00:09:17.189 ************************************ 00:09:17.189 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:09:17.189 06:34:27 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:17.189 06:34:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:17.189 06:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.189 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:09:17.189 ************************************ 00:09:17.189 START TEST nvme_single_aen 00:09:17.189 ************************************ 00:09:17.189 06:34:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:17.189 [2024-11-28 06:34:27.759792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:17.189 [2024-11-28 06:34:27.759861] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.189 [2024-11-28 06:34:27.868666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:17.189 [2024-11-28 06:34:27.870818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:17.189 [2024-11-28 06:34:27.873348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:17.189 [2024-11-28 06:34:27.875191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:17.189 Asynchronous Event Request test 00:09:17.189 Attached to 0000:00:07.0 00:09:17.189 Attached to 0000:00:09.0 00:09:17.189 Attached to 0000:00:06.0 00:09:17.189 Attached to 0000:00:08.0 00:09:17.189 Reset controller to setup AER completions for this process 00:09:17.189 Registering asynchronous event callbacks... 
00:09:17.189 Getting orig temperature thresholds of all controllers 00:09:17.189 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.189 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.189 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.189 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.189 Setting all controllers temperature threshold low to trigger AER 00:09:17.189 Waiting for all controllers temperature threshold to be set lower 00:09:17.189 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.189 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:17.189 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.189 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:17.189 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.189 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:17.189 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.189 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:17.189 Waiting for all controllers to trigger AER and reset threshold 00:09:17.189 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.189 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.189 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.189 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.189 Cleaning up... 00:09:17.189 00:09:17.189 real 0m0.175s 00:09:17.189 user 0m0.058s 00:09:17.189 sys 0m0.078s 00:09:17.189 06:34:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.189 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:09:17.189 ************************************ 00:09:17.189 END TEST nvme_single_aen 00:09:17.189 ************************************ 00:09:17.189 06:34:27 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:17.189 06:34:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.189 06:34:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.189 06:34:27 -- common/autotest_common.sh@10 -- # set +x 00:09:17.451 ************************************ 00:09:17.451 START TEST nvme_doorbell_aers 00:09:17.451 ************************************ 00:09:17.451 06:34:27 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:17.451 06:34:27 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:17.451 06:34:27 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:17.451 06:34:27 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:17.451 06:34:27 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:17.451 06:34:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:17.451 06:34:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:17.451 06:34:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:17.451 06:34:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:17.451 06:34:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:17.451 06:34:28 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:17.451 06:34:28 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:17.451 06:34:28 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:17.451 06:34:28 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:17.451 [2024-11-28 06:34:28.202121] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:27.447 Executing: test_write_invalid_db 00:09:27.447 Waiting for AER completion... 00:09:27.447 Failure: test_write_invalid_db 00:09:27.447 00:09:27.447 Executing: test_invalid_db_write_overflow_sq 00:09:27.447 Waiting for AER completion... 00:09:27.447 Failure: test_invalid_db_write_overflow_sq 00:09:27.447 00:09:27.447 Executing: test_invalid_db_write_overflow_cq 00:09:27.447 Waiting for AER completion... 00:09:27.447 Failure: test_invalid_db_write_overflow_cq 00:09:27.447 00:09:27.447 06:34:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:27.447 06:34:38 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:27.705 [2024-11-28 06:34:38.217731] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:37.675 Executing: test_write_invalid_db 00:09:37.675 Waiting for AER completion... 00:09:37.675 Failure: test_write_invalid_db 00:09:37.675 00:09:37.675 Executing: test_invalid_db_write_overflow_sq 00:09:37.675 Waiting for AER completion... 00:09:37.675 Failure: test_invalid_db_write_overflow_sq 00:09:37.675 00:09:37.675 Executing: test_invalid_db_write_overflow_cq 00:09:37.675 Waiting for AER completion... 00:09:37.675 Failure: test_invalid_db_write_overflow_cq 00:09:37.675 00:09:37.675 06:34:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:37.675 06:34:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:37.675 [2024-11-28 06:34:48.235784] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:47.669 Executing: test_write_invalid_db 00:09:47.669 Waiting for AER completion... 00:09:47.669 Failure: test_write_invalid_db 00:09:47.669 00:09:47.669 Executing: test_invalid_db_write_overflow_sq 00:09:47.669 Waiting for AER completion... 00:09:47.669 Failure: test_invalid_db_write_overflow_sq 00:09:47.669 00:09:47.669 Executing: test_invalid_db_write_overflow_cq 00:09:47.669 Waiting for AER completion... 00:09:47.669 Failure: test_invalid_db_write_overflow_cq 00:09:47.669 00:09:47.669 06:34:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:47.669 06:34:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:47.669 [2024-11-28 06:34:58.274636] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.642 Executing: test_write_invalid_db 00:09:57.642 Waiting for AER completion... 00:09:57.642 Failure: test_write_invalid_db 00:09:57.642 00:09:57.642 Executing: test_invalid_db_write_overflow_sq 00:09:57.642 Waiting for AER completion... 00:09:57.642 Failure: test_invalid_db_write_overflow_sq 00:09:57.642 00:09:57.642 Executing: test_invalid_db_write_overflow_cq 00:09:57.642 Waiting for AER completion... 
00:09:57.642 Failure: test_invalid_db_write_overflow_cq 00:09:57.642 00:09:57.642 00:09:57.642 real 0m40.174s 00:09:57.642 user 0m34.309s 00:09:57.642 sys 0m5.472s 00:09:57.642 06:35:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.642 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.642 ************************************ 00:09:57.642 END TEST nvme_doorbell_aers 00:09:57.642 ************************************ 00:09:57.642 06:35:08 -- nvme/nvme.sh@97 -- # uname 00:09:57.642 06:35:08 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:57.642 06:35:08 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:57.642 06:35:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:57.642 06:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.642 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.643 ************************************ 00:09:57.643 START TEST nvme_multi_aen 00:09:57.643 ************************************ 00:09:57.643 06:35:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:57.643 [2024-11-28 06:35:08.204937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:57.643 [2024-11-28 06:35:08.204996] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.643 [2024-11-28 06:35:08.309320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:57.643 [2024-11-28 06:35:08.309368] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.309395] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.309404] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.310720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:57.643 [2024-11-28 06:35:08.310740] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.310758] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.310767] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.311807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:57.643 [2024-11-28 06:35:08.311826] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.311843] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 
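The doorbell_aers loop that produced the four per-device blocks above can be reconstructed almost line-for-line from the xtrace (nvme/nvme.sh@70-73 and autotest_common.sh@1508-1514); only the function framing is assumed:

nvme_doorbell_aers() {
    local bdfs bdf
    # autotest_common.sh@1509: enumerate NVMe PCI addresses from the generated config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # nvme/nvme.sh@73: give each device 10 s; --preserve-status keeps the
        # test binary's exit code rather than timeout's own
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done
}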
00:09:57.643 [2024-11-28 06:35:08.311850] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.312983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:57.643 [2024-11-28 06:35:08.313001] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.313017] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.313024] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75245) is not found. Dropping the request. 00:09:57.643 [2024-11-28 06:35:08.322902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:57.643 Child process pid: 75767 00:09:57.643 [2024-11-28 06:35:08.322997] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.902 [Child] Asynchronous Event Request test 00:09:57.902 [Child] Attached to 0000:00:07.0 00:09:57.902 [Child] Attached to 0000:00:09.0 00:09:57.902 [Child] Attached to 0000:00:06.0 00:09:57.902 [Child] Attached to 0000:00:08.0 00:09:57.902 [Child] Registering asynchronous event callbacks... 00:09:57.902 [Child] Getting orig temperature thresholds of all controllers 00:09:57.902 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:57.902 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 [Child] Cleaning up... 00:09:57.902 Asynchronous Event Request test 00:09:57.902 Attached to 0000:00:07.0 00:09:57.902 Attached to 0000:00:09.0 00:09:57.902 Attached to 0000:00:06.0 00:09:57.902 Attached to 0000:00:08.0 00:09:57.902 Reset controller to setup AER completions for this process 00:09:57.902 Registering asynchronous event callbacks... 
00:09:57.902 Getting orig temperature thresholds of all controllers 00:09:57.902 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.902 Setting all controllers temperature threshold low to trigger AER 00:09:57.902 Waiting for all controllers temperature threshold to be set lower 00:09:57.902 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:57.902 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:57.902 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:57.902 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.902 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:57.902 Waiting for all controllers to trigger AER and reset threshold 00:09:57.902 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.902 Cleaning up... 00:09:57.902 00:09:57.902 real 0m0.315s 00:09:57.902 user 0m0.097s 00:09:57.902 sys 0m0.132s 00:09:57.902 06:35:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.902 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.902 ************************************ 00:09:57.902 END TEST nvme_multi_aen 00:09:57.902 ************************************ 00:09:57.902 06:35:08 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:57.902 06:35:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:57.902 06:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.902 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:57.902 ************************************ 00:09:57.902 START TEST nvme_startup 00:09:57.902 ************************************ 00:09:57.902 06:35:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:58.161 Initializing NVMe Controllers 00:09:58.161 Attached to 0000:00:07.0 00:09:58.161 Attached to 0000:00:09.0 00:09:58.161 Attached to 0000:00:06.0 00:09:58.161 Attached to 0000:00:08.0 00:09:58.161 Initialization complete. 00:09:58.161 Time used:102012.289 (us). 
00:09:58.161 00:09:58.161 real 0m0.150s 00:09:58.161 user 0m0.049s 00:09:58.161 sys 0m0.066s 00:09:58.161 06:35:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.161 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:58.161 ************************************ 00:09:58.161 END TEST nvme_startup 00:09:58.161 ************************************ 00:09:58.161 06:35:08 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:58.161 06:35:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.161 06:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.161 06:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:58.161 ************************************ 00:09:58.161 START TEST nvme_multi_secondary 00:09:58.161 ************************************ 00:09:58.161 06:35:08 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:09:58.161 06:35:08 -- nvme/nvme.sh@52 -- # pid0=75818 00:09:58.161 06:35:08 -- nvme/nvme.sh@54 -- # pid1=75819 00:09:58.161 06:35:08 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:58.161 06:35:08 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:58.161 06:35:08 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:01.448 Initializing NVMe Controllers 00:10:01.448 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:01.448 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:01.448 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:01.448 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:01.448 Initialization complete. Launching workers. 
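The nvme_multi_secondary trace above reduces to this shape: one spdk_nvme_perf instance per core mask, each joining the same DPDK shared-memory group via -i 0 so that one acts as the primary process and the others attach as secondaries. The commands are verbatim from the trace; the backgrounding, pid bookkeeping, and which pid maps to which instance are an assumed reconstruction of nvme.sh@51-57:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &            # longest-lived instance keeps shm group 0 open
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!    # 3 s reader pinned to core 1
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!    # 3 s reader pinned to core 2
wait "$pid0"    # nvme.sh@56: wait 75818
wait "$pid1"    # nvme.sh@57: wait 75819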
00:10:01.448 ======================================================== 00:10:01.448 Latency(us) 00:10:01.448 Device Information : IOPS MiB/s Average min max 00:10:01.448 PCIE (0000:00:07.0) NSID 1 from core 1: 7461.01 29.14 2144.00 914.50 5880.16 00:10:01.448 PCIE (0000:00:09.0) NSID 1 from core 1: 7461.01 29.14 2144.04 902.57 7029.85 00:10:01.448 PCIE (0000:00:06.0) NSID 1 from core 1: 7461.01 29.14 2143.10 867.20 7021.46 00:10:01.448 PCIE (0000:00:08.0) NSID 1 from core 1: 7461.01 29.14 2144.02 902.28 6563.81 00:10:01.448 PCIE (0000:00:08.0) NSID 2 from core 1: 7461.01 29.14 2143.97 915.69 5679.44 00:10:01.448 PCIE (0000:00:08.0) NSID 3 from core 1: 7461.01 29.14 2144.04 910.55 5591.36 00:10:01.448 ======================================================== 00:10:01.448 Total : 44766.08 174.87 2143.86 867.20 7029.85 00:10:01.448 00:10:01.448 Initializing NVMe Controllers 00:10:01.448 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:01.448 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:01.448 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:01.448 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:01.448 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:01.448 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:01.448 Initialization complete. Launching workers. 00:10:01.448 ======================================================== 00:10:01.448 Latency(us) 00:10:01.448 Device Information : IOPS MiB/s Average min max 00:10:01.448 PCIE (0000:00:07.0) NSID 1 from core 2: 3191.40 12.47 5012.52 1286.19 16487.63 00:10:01.448 PCIE (0000:00:09.0) NSID 1 from core 2: 3191.40 12.47 5013.43 1366.84 12809.35 00:10:01.448 PCIE (0000:00:06.0) NSID 1 from core 2: 3191.40 12.47 5011.69 1308.81 12659.08 00:10:01.448 PCIE (0000:00:08.0) NSID 1 from core 2: 3191.40 12.47 5013.17 1240.16 12692.38 00:10:01.448 PCIE (0000:00:08.0) NSID 2 from core 2: 3191.40 12.47 5013.07 1241.82 16146.82 00:10:01.448 PCIE (0000:00:08.0) NSID 3 from core 2: 3191.40 12.47 5012.44 1198.77 16152.11 00:10:01.448 ======================================================== 00:10:01.448 Total : 19148.40 74.80 5012.72 1198.77 16487.63 00:10:01.448 00:10:01.448 06:35:12 -- nvme/nvme.sh@56 -- # wait 75818 00:10:03.348 Initializing NVMe Controllers 00:10:03.348 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:03.348 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:03.348 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:03.348 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:03.348 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:03.348 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:03.348 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:03.348 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:03.348 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:03.348 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:03.348 Initialization complete. Launching workers. 
00:10:03.348 ======================================================== 00:10:03.348 Latency(us) 00:10:03.348 Device Information : IOPS MiB/s Average min max 00:10:03.348 PCIE (0000:00:07.0) NSID 1 from core 0: 10530.10 41.13 1519.07 740.32 6090.30 00:10:03.348 PCIE (0000:00:09.0) NSID 1 from core 0: 10530.10 41.13 1519.07 739.12 8848.26 00:10:03.348 PCIE (0000:00:06.0) NSID 1 from core 0: 10530.10 41.13 1518.27 684.93 8334.88 00:10:03.348 PCIE (0000:00:08.0) NSID 1 from core 0: 10530.10 41.13 1519.04 636.76 7266.43 00:10:03.348 PCIE (0000:00:08.0) NSID 2 from core 0: 10530.10 41.13 1519.02 483.22 6654.85 00:10:03.348 PCIE (0000:00:08.0) NSID 3 from core 0: 10530.10 41.13 1519.00 426.49 6364.93 00:10:03.348 ======================================================== 00:10:03.348 Total : 63180.62 246.80 1518.91 426.49 8848.26 00:10:03.348 00:10:03.348 06:35:13 -- nvme/nvme.sh@57 -- # wait 75819 00:10:03.348 06:35:13 -- nvme/nvme.sh@61 -- # pid0=75888 00:10:03.348 06:35:13 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:03.348 06:35:13 -- nvme/nvme.sh@63 -- # pid1=75889 00:10:03.348 06:35:13 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:03.348 06:35:13 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:06.627 Initializing NVMe Controllers 00:10:06.627 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:06.627 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:06.627 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:06.627 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:06.627 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:06.627 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:06.627 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:06.627 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:06.627 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:06.627 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:06.627 Initialization complete. Launching workers. 
00:10:06.627 ========================================================
00:10:06.627 Latency(us)
00:10:06.627 Device Information : IOPS MiB/s Average min max
00:10:06.627 PCIE (0000:00:07.0) NSID 1 from core 0: 7497.23 29.29 2133.69 760.60 8783.55
00:10:06.627 PCIE (0000:00:09.0) NSID 1 from core 0: 7497.23 29.29 2133.65 767.55 9004.66
00:10:06.627 PCIE (0000:00:06.0) NSID 1 from core 0: 7497.23 29.29 2132.77 746.91 9352.86
00:10:06.627 PCIE (0000:00:08.0) NSID 1 from core 0: 7497.23 29.29 2133.72 764.35 9797.36
00:10:06.627 PCIE (0000:00:08.0) NSID 2 from core 0: 7497.23 29.29 2133.70 762.12 10633.41
00:10:06.627 PCIE (0000:00:08.0) NSID 3 from core 0: 7497.23 29.29 2133.70 768.02 9542.46
00:10:06.627 ========================================================
00:10:06.627 Total : 44983.39 175.72 2133.54 746.91 10633.41
00:10:06.627
00:10:06.627 Initializing NVMe Controllers
00:10:06.627 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:10:06.627 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:10:06.627 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:10:06.627 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:10:06.627 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1
00:10:06.627 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1
00:10:06.627 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1
00:10:06.627 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1
00:10:06.627 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1
00:10:06.627 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1
00:10:06.627 Initialization complete. Launching workers.
00:10:06.627 ========================================================
00:10:06.627 Latency(us)
00:10:06.627 Device Information : IOPS MiB/s Average min max
00:10:06.627 PCIE (0000:00:07.0) NSID 1 from core 1: 7381.27 28.83 2167.12 734.54 6538.94
00:10:06.627 PCIE (0000:00:09.0) NSID 1 from core 1: 7381.27 28.83 2167.18 739.69 7129.77
00:10:06.627 PCIE (0000:00:06.0) NSID 1 from core 1: 7381.27 28.83 2166.16 714.53 7169.46
00:10:06.627 PCIE (0000:00:08.0) NSID 1 from core 1: 7381.27 28.83 2167.16 731.51 7162.53
00:10:06.627 PCIE (0000:00:08.0) NSID 2 from core 1: 7381.27 28.83 2167.12 742.52 6312.26
00:10:06.627 PCIE (0000:00:08.0) NSID 3 from core 1: 7381.27 28.83 2167.18 742.48 6524.44
00:10:06.627 ========================================================
00:10:06.627 Total : 44287.62 173.00 2166.99 714.53 7169.46
00:10:06.627
00:10:08.528 Initializing NVMe Controllers
00:10:08.528 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:10:08.528 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:10:08.528 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:10:08.528 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:10:08.528 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2
00:10:08.528 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2
00:10:08.528 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2
00:10:08.528 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2
00:10:08.528 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2
00:10:08.528 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2
00:10:08.528 Initialization complete. Launching workers.
00:10:08.528 ======================================================== 00:10:08.528 Latency(us) 00:10:08.528 Device Information : IOPS MiB/s Average min max 00:10:08.528 PCIE (0000:00:07.0) NSID 1 from core 2: 4703.76 18.37 3400.60 745.98 14033.58 00:10:08.528 PCIE (0000:00:09.0) NSID 1 from core 2: 4703.76 18.37 3401.04 754.30 14204.81 00:10:08.528 PCIE (0000:00:06.0) NSID 1 from core 2: 4703.76 18.37 3399.62 725.83 13259.03 00:10:08.528 PCIE (0000:00:08.0) NSID 1 from core 2: 4703.76 18.37 3400.67 731.76 13199.54 00:10:08.528 PCIE (0000:00:08.0) NSID 2 from core 2: 4703.76 18.37 3400.55 687.91 14652.83 00:10:08.528 PCIE (0000:00:08.0) NSID 3 from core 2: 4706.95 18.39 3398.29 656.16 14185.19 00:10:08.528 ======================================================== 00:10:08.528 Total : 28225.73 110.26 3400.13 656.16 14652.83 00:10:08.528 00:10:08.786 06:35:19 -- nvme/nvme.sh@65 -- # wait 75888 00:10:08.786 06:35:19 -- nvme/nvme.sh@66 -- # wait 75889 00:10:08.786 00:10:08.786 real 0m10.581s 00:10:08.786 user 0m18.262s 00:10:08.786 sys 0m0.550s 00:10:08.786 06:35:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.786 06:35:19 -- common/autotest_common.sh@10 -- # set +x 00:10:08.786 ************************************ 00:10:08.786 END TEST nvme_multi_secondary 00:10:08.786 ************************************ 00:10:08.786 06:35:19 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:08.786 06:35:19 -- nvme/nvme.sh@102 -- # kill_stub 00:10:08.786 06:35:19 -- common/autotest_common.sh@1075 -- # [[ -e /proc/74833 ]] 00:10:08.786 06:35:19 -- common/autotest_common.sh@1076 -- # kill 74833 00:10:08.786 06:35:19 -- common/autotest_common.sh@1077 -- # wait 74833 00:10:09.721 [2024-11-28 06:35:20.252886] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:09.721 [2024-11-28 06:35:20.253009] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:09.721 [2024-11-28 06:35:20.253042] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:09.721 [2024-11-28 06:35:20.253073] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.290 [2024-11-28 06:35:20.751840] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.290 [2024-11-28 06:35:20.751955] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.290 [2024-11-28 06:35:20.751989] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.290 [2024-11-28 06:35:20.752020] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.552 [2024-11-28 06:35:21.259045] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.552 [2024-11-28 06:35:21.259149] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. 
Dropping the request. 00:10:10.552 [2024-11-28 06:35:21.259179] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:10.552 [2024-11-28 06:35:21.259213] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:12.462 [2024-11-28 06:35:22.762458] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:12.462 [2024-11-28 06:35:22.762551] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:12.462 [2024-11-28 06:35:22.762579] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:12.462 [2024-11-28 06:35:22.762602] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75766) is not found. Dropping the request. 00:10:12.462 06:35:22 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:12.462 06:35:22 -- common/autotest_common.sh@1083 -- # echo 2 00:10:12.462 06:35:22 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:12.462 06:35:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.462 06:35:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.462 06:35:22 -- common/autotest_common.sh@10 -- # set +x 00:10:12.462 ************************************ 00:10:12.462 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:12.462 ************************************ 00:10:12.462 06:35:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:12.462 * Looking for test storage... 00:10:12.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:12.462 06:35:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:12.462 06:35:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:12.462 06:35:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:12.462 06:35:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:12.462 06:35:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:12.462 06:35:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:12.462 06:35:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:12.462 06:35:22 -- scripts/common.sh@335 -- # IFS=.-: 00:10:12.462 06:35:22 -- scripts/common.sh@335 -- # read -ra ver1 00:10:12.462 06:35:22 -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.462 06:35:22 -- scripts/common.sh@336 -- # read -ra ver2 00:10:12.462 06:35:22 -- scripts/common.sh@337 -- # local 'op=<' 00:10:12.462 06:35:22 -- scripts/common.sh@339 -- # ver1_l=2 00:10:12.462 06:35:22 -- scripts/common.sh@340 -- # ver2_l=1 00:10:12.462 06:35:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:12.462 06:35:22 -- scripts/common.sh@343 -- # case "$op" in 00:10:12.462 06:35:22 -- scripts/common.sh@344 -- # : 1 00:10:12.462 06:35:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:12.462 06:35:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.462 06:35:22 -- scripts/common.sh@364 -- # decimal 1 00:10:12.462 06:35:22 -- scripts/common.sh@352 -- # local d=1 00:10:12.462 06:35:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.462 06:35:22 -- scripts/common.sh@354 -- # echo 1 00:10:12.462 06:35:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:12.462 06:35:22 -- scripts/common.sh@365 -- # decimal 2 00:10:12.462 06:35:22 -- scripts/common.sh@352 -- # local d=2 00:10:12.462 06:35:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.462 06:35:22 -- scripts/common.sh@354 -- # echo 2 00:10:12.462 06:35:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:12.462 06:35:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:12.462 06:35:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:12.462 06:35:22 -- scripts/common.sh@367 -- # return 0 00:10:12.462 06:35:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.462 06:35:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:12.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.462 --rc genhtml_branch_coverage=1 00:10:12.462 --rc genhtml_function_coverage=1 00:10:12.462 --rc genhtml_legend=1 00:10:12.462 --rc geninfo_all_blocks=1 00:10:12.463 --rc geninfo_unexecuted_blocks=1 00:10:12.463 00:10:12.463 ' 00:10:12.463 06:35:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:12.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.463 --rc genhtml_branch_coverage=1 00:10:12.463 --rc genhtml_function_coverage=1 00:10:12.463 --rc genhtml_legend=1 00:10:12.463 --rc geninfo_all_blocks=1 00:10:12.463 --rc geninfo_unexecuted_blocks=1 00:10:12.463 00:10:12.463 ' 00:10:12.463 06:35:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:12.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.463 --rc genhtml_branch_coverage=1 00:10:12.463 --rc genhtml_function_coverage=1 00:10:12.463 --rc genhtml_legend=1 00:10:12.463 --rc geninfo_all_blocks=1 00:10:12.463 --rc geninfo_unexecuted_blocks=1 00:10:12.463 00:10:12.463 ' 00:10:12.463 06:35:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:12.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.463 --rc genhtml_branch_coverage=1 00:10:12.463 --rc genhtml_function_coverage=1 00:10:12.463 --rc genhtml_legend=1 00:10:12.463 --rc geninfo_all_blocks=1 00:10:12.463 --rc geninfo_unexecuted_blocks=1 00:10:12.463 00:10:12.463 ' 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:12.463 06:35:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:12.463 06:35:22 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:12.463 06:35:22 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:12.463 06:35:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:12.463 06:35:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:12.463 06:35:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:12.463 06:35:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:12.463 06:35:22 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:12.463 06:35:22 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:12.463 06:35:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:12.463 06:35:23 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:12.463 06:35:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:12.463 06:35:23 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76087 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:12.463 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76087 00:10:12.463 06:35:23 -- common/autotest_common.sh@829 -- # '[' -z 76087 ']' 00:10:12.463 06:35:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.463 06:35:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.463 06:35:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.463 06:35:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.463 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:10:12.463 [2024-11-28 06:35:23.100210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
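For reference, the bdf-discovery steps traced just above reduce to the following shell sketch; the repo path and the traddr values are taken from this run's layout and are illustrative only, not part of the test itself.

rootdir=/home/vagrant/spdk_repo/spdk
# Collect every controller traddr that gen_nvme.sh emits (jq pulls them out
# of the generated bdev_nvme_attach_controller config), as traced above.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
bdf=${bdfs[0]}   # first controller; 0000:00:06.0 in this run
printf '%s\n' "${bdfs[@]}"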
00:10:12.463 [2024-11-28 06:35:23.100315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76087 ] 00:10:12.722 [2024-11-28 06:35:23.245831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.722 [2024-11-28 06:35:23.277782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.722 [2024-11-28 06:35:23.278207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.722 [2024-11-28 06:35:23.278551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.722 [2024-11-28 06:35:23.278667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.722 [2024-11-28 06:35:23.278786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.299 06:35:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.299 06:35:23 -- common/autotest_common.sh@862 -- # return 0 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:13.299 06:35:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.299 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 nvme0n1 00:10:13.299 06:35:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_UBenz.txt 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:13.299 06:35:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.299 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:10:13.299 true 00:10:13.299 06:35:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732775723 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76110 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:13.299 06:35:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:15.224 06:35:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:15.224 06:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.225 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:10:15.225 [2024-11-28 06:35:25.994206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:15.225 [2024-11-28 06:35:25.994762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:15.483 [2024-11-28 06:35:25.994806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.483 [2024-11-28 06:35:25.994821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.483 [2024-11-28 06:35:25.996491] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:15.483 06:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.483 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76110 00:10:15.483 06:35:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76110 00:10:15.483 06:35:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76110 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:15.483 06:35:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.483 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:10:15.483 06:35:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_UBenz.txt 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_UBenz.txt 00:10:15.483 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76087 00:10:15.483 06:35:26 -- common/autotest_common.sh@936 -- # '[' -z 76087 ']' 00:10:15.483 06:35:26 -- common/autotest_common.sh@940 -- # kill -0 76087 00:10:15.483 06:35:26 -- common/autotest_common.sh@941 -- # uname 00:10:15.483 06:35:26 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.483 06:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76087 00:10:15.483 06:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:15.483 06:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:15.483 06:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76087' 00:10:15.483 killing process with pid 76087 00:10:15.483 06:35:26 -- common/autotest_common.sh@955 -- # kill 76087 00:10:15.483 06:35:26 -- common/autotest_common.sh@960 -- # wait 76087 00:10:15.744 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:15.744 06:35:26 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:15.744 00:10:15.744 real 0m3.493s 00:10:15.744 user 0m12.485s 00:10:15.744 sys 0m0.440s 00:10:15.744 06:35:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:15.744 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:10:15.744 ************************************ 00:10:15.744 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:15.744 ************************************ 00:10:15.744 06:35:26 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:15.744 06:35:26 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:15.744 06:35:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:15.744 06:35:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:15.744 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:10:15.744 ************************************ 00:10:15.744 START TEST nvme_fio 00:10:15.744 ************************************ 00:10:15.744 06:35:26 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:10:15.744 06:35:26 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:15.744 06:35:26 -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:15.744 06:35:26 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:15.744 06:35:26 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:15.744 06:35:26 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:15.744 06:35:26 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:15.744 06:35:26 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:15.744 06:35:26 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:15.744 06:35:26 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:15.744 06:35:26 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:15.744 06:35:26 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:10:15.744 06:35:26 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:15.744 06:35:26 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:15.744 06:35:26 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:15.744 06:35:26 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:16.005 06:35:26 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:16.005 06:35:26 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:10:16.265 06:35:26 -- nvme/nvme.sh@41 -- # bs=4096 00:10:16.265 06:35:26 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:16.266 06:35:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:16.266 06:35:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:16.266 06:35:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:16.266 06:35:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:16.266 06:35:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:16.266 06:35:26 -- common/autotest_common.sh@1330 -- # shift 00:10:16.266 06:35:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:16.266 06:35:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:16.266 06:35:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:16.266 06:35:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:16.266 06:35:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:16.266 06:35:26 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:16.266 06:35:26 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:16.266 06:35:26 -- common/autotest_common.sh@1336 -- # break 00:10:16.266 06:35:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:16.266 06:35:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:10:16.266 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:16.266 fio-3.35 00:10:16.266 Starting 1 thread 00:10:22.851 00:10:22.851 test: (groupid=0, jobs=1): err= 0: pid=76235: Thu Nov 28 06:35:33 2024 00:10:22.851 read: IOPS=23.6k, BW=92.0MiB/s (96.5MB/s)(184MiB/2001msec) 00:10:22.851 slat (usec): min=3, max=100, avg= 4.95, stdev= 2.32 00:10:22.851 clat (usec): min=580, max=11201, avg=2708.28, stdev=873.87 00:10:22.851 lat (usec): min=584, max=11266, avg=2713.23, stdev=875.33 00:10:22.851 clat percentiles (usec): 00:10:22.851 | 1.00th=[ 1631], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2278], 00:10:22.851 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2507], 60.00th=[ 2540], 00:10:22.851 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 3458], 95.00th=[ 4817], 00:10:22.851 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 9110], 00:10:22.851 | 99.99th=[11076] 00:10:22.851 bw ( KiB/s): min=90224, max=100896, per=100.00%, avg=95482.67, stdev=5337.68, samples=3 00:10:22.851 iops : min=22556, max=25224, avg=23870.67, stdev=1334.42, samples=3 00:10:22.851 write: IOPS=23.4k, BW=91.4MiB/s (95.8MB/s)(183MiB/2001msec); 0 zone resets 00:10:22.851 slat (nsec): min=3440, max=95357, avg=5217.32, stdev=2290.50 00:10:22.851 clat (usec): min=571, max=11134, avg=2725.96, stdev=894.27 00:10:22.851 lat (usec): min=576, max=11149, avg=2731.18, stdev=895.75 00:10:22.851 clat percentiles (usec): 00:10:22.851 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2278], 00:10:22.851 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2507], 60.00th=[ 2573], 00:10:22.851 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 3523], 95.00th=[ 4948], 00:10:22.851 | 99.00th=[ 6652], 99.50th=[ 6849], 
99.90th=[ 7373], 99.95th=[ 9503], 00:10:22.851 | 99.99th=[10945] 00:10:22.851 bw ( KiB/s): min=90896, max=100672, per=100.00%, avg=95488.00, stdev=4914.81, samples=3 00:10:22.851 iops : min=22724, max=25168, avg=23872.00, stdev=1228.70, samples=3 00:10:22.851 lat (usec) : 750=0.03%, 1000=0.05% 00:10:22.851 lat (msec) : 2=2.35%, 4=90.35%, 10=7.18%, 20=0.04% 00:10:22.851 cpu : usr=99.30%, sys=0.00%, ctx=5, majf=0, minf=626 00:10:22.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:22.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.851 issued rwts: total=47136,46805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.851 00:10:22.851 Run status group 0 (all jobs): 00:10:22.851 READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:10:22.851 WRITE: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:23.112 ----------------------------------------------------- 00:10:23.112 Suppressions used: 00:10:23.112 count bytes template 00:10:23.112 1 32 /usr/src/fio/parse.c 00:10:23.112 1 8 libtcmalloc_minimal.so 00:10:23.112 ----------------------------------------------------- 00:10:23.112 00:10:23.112 06:35:33 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:23.112 06:35:33 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:23.112 06:35:33 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:23.112 06:35:33 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:23.372 06:35:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:23.373 06:35:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:23.633 06:35:34 -- nvme/nvme.sh@41 -- # bs=4096 00:10:23.633 06:35:34 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:23.633 06:35:34 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:23.633 06:35:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:23.633 06:35:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:23.633 06:35:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:23.633 06:35:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.633 06:35:34 -- common/autotest_common.sh@1330 -- # shift 00:10:23.633 06:35:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:23.633 06:35:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:23.633 06:35:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:23.633 06:35:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:23.633 06:35:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:23.633 06:35:34 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:23.633 06:35:34 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:23.633 
06:35:34 -- common/autotest_common.sh@1336 -- # break 00:10:23.633 06:35:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:23.633 06:35:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:10:23.633 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:23.633 fio-3.35 00:10:23.633 Starting 1 thread 00:10:31.773 00:10:31.773 test: (groupid=0, jobs=1): err= 0: pid=76301: Thu Nov 28 06:35:41 2024 00:10:31.773 read: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(196MiB/2001msec) 00:10:31.773 slat (nsec): min=3337, max=81809, avg=4751.46, stdev=1884.33 00:10:31.773 clat (usec): min=243, max=11159, avg=2546.73, stdev=687.57 00:10:31.773 lat (usec): min=247, max=11213, avg=2551.48, stdev=688.72 00:10:31.773 clat percentiles (usec): 00:10:31.773 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2278], 00:10:31.773 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:10:31.773 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2835], 95.00th=[ 3982], 00:10:31.773 | 99.00th=[ 5800], 99.50th=[ 6194], 99.90th=[ 6652], 99.95th=[ 8717], 00:10:31.773 | 99.99th=[10814] 00:10:31.773 bw ( KiB/s): min=99304, max=100656, per=99.45%, avg=99824.00, stdev=728.00, samples=3 00:10:31.773 iops : min=24828, max=25162, avg=24956.00, stdev=180.14, samples=3 00:10:31.773 write: IOPS=24.9k, BW=97.4MiB/s (102MB/s)(195MiB/2001msec); 0 zone resets 00:10:31.773 slat (nsec): min=3485, max=63754, avg=5019.02, stdev=1915.11 00:10:31.773 clat (usec): min=212, max=11049, avg=2551.92, stdev=690.20 00:10:31.773 lat (usec): min=217, max=11062, avg=2556.94, stdev=691.35 00:10:31.773 clat percentiles (usec): 00:10:31.773 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:31.773 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2409], 00:10:31.773 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2835], 95.00th=[ 4015], 00:10:31.773 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 6783], 99.95th=[ 9372], 00:10:31.773 | 99.99th=[10814] 00:10:31.773 bw ( KiB/s): min=99144, max=101008, per=100.00%, avg=99848.00, stdev=1012.21, samples=3 00:10:31.773 iops : min=24786, max=25252, avg=24962.00, stdev=253.05, samples=3 00:10:31.773 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:31.773 lat (msec) : 2=2.92%, 4=91.99%, 10=4.98%, 20=0.04% 00:10:31.773 cpu : usr=99.30%, sys=0.05%, ctx=3, majf=0, minf=626 00:10:31.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:31.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.773 issued rwts: total=50213,49911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.773 00:10:31.773 Run status group 0 (all jobs): 00:10:31.773 READ: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=196MiB (206MB), run=2001-2001msec 00:10:31.773 WRITE: bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=195MiB (204MB), run=2001-2001msec 00:10:31.773 ----------------------------------------------------- 00:10:31.773 Suppressions used: 00:10:31.773 count bytes template 00:10:31.773 1 32 /usr/src/fio/parse.c 00:10:31.773 1 8 libtcmalloc_minimal.so 00:10:31.773 
----------------------------------------------------- 00:10:31.773 00:10:31.773 06:35:42 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:31.773 06:35:42 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:31.773 06:35:42 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:31.773 06:35:42 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:31.773 06:35:42 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:31.773 06:35:42 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:31.773 06:35:42 -- nvme/nvme.sh@41 -- # bs=4096 00:10:31.773 06:35:42 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:31.773 06:35:42 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:31.773 06:35:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:31.773 06:35:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:31.773 06:35:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:31.773 06:35:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.773 06:35:42 -- common/autotest_common.sh@1330 -- # shift 00:10:31.773 06:35:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:31.773 06:35:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:31.773 06:35:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:31.773 06:35:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.773 06:35:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:31.773 06:35:42 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:31.773 06:35:42 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:31.773 06:35:42 -- common/autotest_common.sh@1336 -- # break 00:10:31.773 06:35:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:31.773 06:35:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:10:32.062 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:32.062 fio-3.35 00:10:32.062 Starting 1 thread 00:10:40.226 00:10:40.226 test: (groupid=0, jobs=1): err= 0: pid=76393: Thu Nov 28 06:35:49 2024 00:10:40.226 read: IOPS=22.9k, BW=89.5MiB/s (93.9MB/s)(179MiB/2001msec) 00:10:40.226 slat (nsec): min=3314, max=72154, avg=5184.52, stdev=2497.18 00:10:40.226 clat (usec): min=215, max=9671, avg=2791.24, stdev=967.93 00:10:40.226 lat (usec): min=220, max=9725, avg=2796.43, stdev=969.51 00:10:40.226 clat percentiles (usec): 00:10:40.226 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2343], 00:10:40.226 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:10:40.226 | 70.00th=[ 2573], 80.00th=[ 2802], 90.00th=[ 4047], 95.00th=[ 5342], 00:10:40.226 | 99.00th=[ 6587], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 8291], 00:10:40.226 | 99.99th=[ 9241] 00:10:40.226 
bw ( KiB/s): min=84952, max=97192, per=99.10%, avg=90853.33, stdev=6131.71, samples=3 00:10:40.226 iops : min=21238, max=24298, avg=22713.33, stdev=1532.93, samples=3 00:10:40.226 write: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(178MiB/2001msec); 0 zone resets 00:10:40.226 slat (nsec): min=3442, max=84828, avg=5423.33, stdev=2505.61 00:10:40.226 clat (usec): min=201, max=9292, avg=2790.66, stdev=959.86 00:10:40.226 lat (usec): min=206, max=9305, avg=2796.08, stdev=961.42 00:10:40.226 clat percentiles (usec): 00:10:40.226 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2343], 00:10:40.226 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2507], 00:10:40.226 | 70.00th=[ 2606], 80.00th=[ 2802], 90.00th=[ 4015], 95.00th=[ 5276], 00:10:40.226 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 8455], 00:10:40.226 | 99.99th=[ 9110] 00:10:40.226 bw ( KiB/s): min=84784, max=96240, per=99.89%, avg=91032.00, stdev=5798.38, samples=3 00:10:40.226 iops : min=21196, max=24060, avg=22758.00, stdev=1449.59, samples=3 00:10:40.226 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:40.226 lat (msec) : 2=2.10%, 4=87.70%, 10=10.15% 00:10:40.226 cpu : usr=99.15%, sys=0.15%, ctx=4, majf=0, minf=627 00:10:40.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:40.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.226 issued rwts: total=45863,45590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.226 00:10:40.226 Run status group 0 (all jobs): 00:10:40.226 READ: bw=89.5MiB/s (93.9MB/s), 89.5MiB/s-89.5MiB/s (93.9MB/s-93.9MB/s), io=179MiB (188MB), run=2001-2001msec 00:10:40.226 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:40.226 ----------------------------------------------------- 00:10:40.226 Suppressions used: 00:10:40.226 count bytes template 00:10:40.226 1 32 /usr/src/fio/parse.c 00:10:40.226 1 8 libtcmalloc_minimal.so 00:10:40.226 ----------------------------------------------------- 00:10:40.226 00:10:40.226 06:35:50 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:40.226 06:35:50 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:40.226 06:35:50 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:40.226 06:35:50 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:40.226 06:35:50 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:40.226 06:35:50 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:40.226 06:35:50 -- nvme/nvme.sh@41 -- # bs=4096 00:10:40.226 06:35:50 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.226 06:35:50 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.226 06:35:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:40.226 06:35:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:40.226 06:35:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:10:40.226 
06:35:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.226 06:35:50 -- common/autotest_common.sh@1330 -- # shift 00:10:40.226 06:35:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:40.226 06:35:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:40.226 06:35:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:40.226 06:35:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:40.226 06:35:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:40.226 06:35:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:40.227 06:35:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:40.227 06:35:50 -- common/autotest_common.sh@1336 -- # break 00:10:40.227 06:35:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:40.227 06:35:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:40.227 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:40.227 fio-3.35 00:10:40.227 Starting 1 thread 00:10:46.794 00:10:46.794 test: (groupid=0, jobs=1): err= 0: pid=76503: Thu Nov 28 06:35:57 2024 00:10:46.794 read: IOPS=24.6k, BW=96.2MiB/s (101MB/s)(193MiB/2001msec) 00:10:46.794 slat (nsec): min=3320, max=80500, avg=4823.28, stdev=2020.15 00:10:46.794 clat (usec): min=254, max=10760, avg=2598.12, stdev=753.09 00:10:46.794 lat (usec): min=259, max=10811, avg=2602.95, stdev=754.32 00:10:46.794 clat percentiles (usec): 00:10:46.794 | 1.00th=[ 1582], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:46.794 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:10:46.794 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 3261], 95.00th=[ 4146], 00:10:46.794 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 6587], 99.95th=[ 8455], 00:10:46.794 | 99.99th=[10683] 00:10:46.794 bw ( KiB/s): min=96686, max=100536, per=100.00%, avg=99204.67, stdev=2182.42, samples=3 00:10:46.794 iops : min=24171, max=25134, avg=24801.00, stdev=545.89, samples=3 00:10:46.794 write: IOPS=24.5k, BW=95.6MiB/s (100MB/s)(191MiB/2001msec); 0 zone resets 00:10:46.794 slat (nsec): min=3438, max=64760, avg=5093.05, stdev=1978.98 00:10:46.794 clat (usec): min=238, max=10693, avg=2597.41, stdev=750.38 00:10:46.794 lat (usec): min=243, max=10706, avg=2602.51, stdev=751.59 00:10:46.794 clat percentiles (usec): 00:10:46.794 | 1.00th=[ 1582], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2311], 00:10:46.794 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:10:46.794 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 3261], 95.00th=[ 4113], 00:10:46.794 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 8848], 00:10:46.794 | 99.99th=[10552] 00:10:46.794 bw ( KiB/s): min=96391, max=101368, per=100.00%, avg=99298.33, stdev=2592.08, samples=3 00:10:46.794 iops : min=24097, max=25342, avg=24824.33, stdev=648.44, samples=3 00:10:46.794 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.08% 00:10:46.794 lat (msec) : 2=3.89%, 4=90.07%, 10=5.89%, 20=0.03% 00:10:46.794 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=625 00:10:46.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:46.794 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.794 issued rwts: total=49280,48972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.794 00:10:46.794 Run status group 0 (all jobs): 00:10:46.794 READ: bw=96.2MiB/s (101MB/s), 96.2MiB/s-96.2MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:10:46.794 WRITE: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=191MiB (201MB), run=2001-2001msec 00:10:46.794 ----------------------------------------------------- 00:10:46.794 Suppressions used: 00:10:46.794 count bytes template 00:10:46.794 1 32 /usr/src/fio/parse.c 00:10:46.794 1 8 libtcmalloc_minimal.so 00:10:46.794 ----------------------------------------------------- 00:10:46.794 00:10:46.794 06:35:57 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:46.794 06:35:57 -- nvme/nvme.sh@46 -- # true 00:10:46.794 00:10:46.794 real 0m30.803s 00:10:46.794 user 0m16.615s 00:10:46.794 sys 0m27.366s 00:10:46.794 06:35:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.794 ************************************ 00:10:46.794 END TEST nvme_fio 00:10:46.794 ************************************ 00:10:46.794 06:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.794 00:10:46.794 real 1m42.429s 00:10:46.794 user 3m32.565s 00:10:46.794 sys 0m37.396s 00:10:46.794 06:35:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.794 06:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.794 ************************************ 00:10:46.794 END TEST nvme 00:10:46.794 ************************************ 00:10:46.794 06:35:57 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:10:46.794 06:35:57 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:46.794 06:35:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:46.794 06:35:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.794 06:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:46.794 ************************************ 00:10:46.794 START TEST nvme_scc 00:10:46.794 ************************************ 00:10:46.794 06:35:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:46.794 * Looking for test storage... 
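Each of the four nvme_fio passes above drives fio through the same wrapper; a minimal sketch of that pattern, with the plugin path and libasan location assumed from this run, looks like:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# Find the sanitizer runtime the plugin was linked against, as the traced
# fio_plugin helper does with ldd | grep libasan | awk '{print $3}'.
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
# Preload the sanitizer (if found) together with the SPDK ioengine and point
# fio at a PCIe controller by traddr instead of a block device.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096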
00:10:46.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.794 06:35:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:46.794 06:35:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:46.794 06:35:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:46.794 06:35:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:46.794 06:35:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:46.794 06:35:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:46.794 06:35:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:46.794 06:35:57 -- scripts/common.sh@335 -- # IFS=.-: 00:10:46.794 06:35:57 -- scripts/common.sh@335 -- # read -ra ver1 00:10:46.794 06:35:57 -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.794 06:35:57 -- scripts/common.sh@336 -- # read -ra ver2 00:10:46.794 06:35:57 -- scripts/common.sh@337 -- # local 'op=<' 00:10:46.794 06:35:57 -- scripts/common.sh@339 -- # ver1_l=2 00:10:46.794 06:35:57 -- scripts/common.sh@340 -- # ver2_l=1 00:10:46.794 06:35:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:46.794 06:35:57 -- scripts/common.sh@343 -- # case "$op" in 00:10:46.794 06:35:57 -- scripts/common.sh@344 -- # : 1 00:10:46.794 06:35:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:46.794 06:35:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.794 06:35:57 -- scripts/common.sh@364 -- # decimal 1 00:10:46.794 06:35:57 -- scripts/common.sh@352 -- # local d=1 00:10:46.794 06:35:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.794 06:35:57 -- scripts/common.sh@354 -- # echo 1 00:10:46.794 06:35:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:46.794 06:35:57 -- scripts/common.sh@365 -- # decimal 2 00:10:46.794 06:35:57 -- scripts/common.sh@352 -- # local d=2 00:10:46.794 06:35:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.794 06:35:57 -- scripts/common.sh@354 -- # echo 2 00:10:46.794 06:35:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:46.794 06:35:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:46.794 06:35:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:46.794 06:35:57 -- scripts/common.sh@367 -- # return 0 00:10:46.794 06:35:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.794 06:35:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.794 --rc genhtml_branch_coverage=1 00:10:46.794 --rc genhtml_function_coverage=1 00:10:46.794 --rc genhtml_legend=1 00:10:46.794 --rc geninfo_all_blocks=1 00:10:46.794 --rc geninfo_unexecuted_blocks=1 00:10:46.794 00:10:46.794 ' 00:10:46.794 06:35:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.794 --rc genhtml_branch_coverage=1 00:10:46.794 --rc genhtml_function_coverage=1 00:10:46.794 --rc genhtml_legend=1 00:10:46.794 --rc geninfo_all_blocks=1 00:10:46.794 --rc geninfo_unexecuted_blocks=1 00:10:46.794 00:10:46.794 ' 00:10:46.794 06:35:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.794 --rc genhtml_branch_coverage=1 00:10:46.794 --rc genhtml_function_coverage=1 00:10:46.794 --rc genhtml_legend=1 00:10:46.794 --rc geninfo_all_blocks=1 00:10:46.794 --rc geninfo_unexecuted_blocks=1 00:10:46.794 00:10:46.794 ' 00:10:46.794 06:35:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:46.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.794 --rc genhtml_branch_coverage=1 00:10:46.794 --rc genhtml_function_coverage=1 00:10:46.794 --rc genhtml_legend=1 00:10:46.794 --rc geninfo_all_blocks=1 00:10:46.794 --rc geninfo_unexecuted_blocks=1 00:10:46.794 00:10:46.794 ' 00:10:46.794 06:35:57 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:46.794 06:35:57 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:46.794 06:35:57 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:46.794 06:35:57 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:46.794 06:35:57 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.794 06:35:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.794 06:35:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.794 06:35:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.794 06:35:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.794 06:35:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.795 06:35:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.795 06:35:57 -- paths/export.sh@5 -- # export PATH 00:10:46.795 06:35:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.795 06:35:57 -- nvme/functions.sh@10 -- # ctrls=() 00:10:46.795 06:35:57 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:46.795 06:35:57 -- nvme/functions.sh@11 -- # nvmes=() 00:10:46.795 06:35:57 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:46.795 06:35:57 -- nvme/functions.sh@12 -- # bdfs=() 00:10:46.795 06:35:57 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:46.795 06:35:57 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:46.795 06:35:57 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:46.795 
06:35:57 -- nvme/functions.sh@14 -- # nvme_name= 00:10:46.795 06:35:57 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.795 06:35:57 -- nvme/nvme_scc.sh@12 -- # uname 00:10:46.795 06:35:57 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:46.795 06:35:57 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:46.795 06:35:57 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:47.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.053 Waiting for block devices as requested 00:10:47.311 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.311 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.311 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.311 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.609 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:52.609 06:36:03 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:52.609 06:36:03 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:52.609 06:36:03 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:52.609 06:36:03 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:52.609 06:36:03 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:52.609 06:36:03 -- scripts/common.sh@15 -- # local i 00:10:52.609 06:36:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:52.609 06:36:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:52.609 06:36:03 -- scripts/common.sh@24 -- # return 0 00:10:52.609 06:36:03 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:52.609 06:36:03 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:52.609 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.609 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
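The nvme_get dump unfolding here is functions.sh eval'ing one identify field per xtrace line into an associative array; a condensed sketch of that loop, assuming nvme-cli's usual "field : value" id-ctrl text output (binary path and /dev/nvme0 as in this run):

declare -A nvme0
while IFS=: read -r reg val; do
  reg=${reg// /}                      # field name, e.g. vid, sn, mdts
  [[ -n $reg && -n $val ]] || continue
  nvme0[$reg]=${val# }                # raw value, e.g. 0x1b36
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
printf 'vid=%s ssvid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[ssvid]}" "${nvme0[mdts]}"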
00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:52.609 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.609 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.609 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 
06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:52.610 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.610 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:52.610 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.610 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:52.611 06:36:03 
-- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:52.611 
06:36:03 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:52.611 06:36:03 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.611 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.611 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 
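The trace up to this point is nvme/functions.sh's nvme_get loop (functions.sh@17-23) filling the nvme0 associative array from `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0` output. A minimal sketch of that parsing pattern, assuming nvme-cli's human-readable "field : value" format; the standalone layout and the `ctrl` array name are illustrative, not the exact SPDK helper:

    #!/usr/bin/env bash
    # Split each "reg : val" line on the first colon and store it in an
    # associative array -- the IFS=: / read -r reg val / eval pattern that
    # produces the nvme0[...]=... assignments seen in the trace.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank or partial lines
        reg=${reg//[[:space:]]/}               # "ps    0" -> "ps0", as in nvme0[ps0]
        val=${val# }                           # drop the single space after the colon
        eval "ctrl[$reg]=\"$val\""             # e.g. ctrl[mdts]=7, ctrl[oacs]=0x12a
    done < <(nvme id-ctrl /dev/nvme0)
    echo "mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Values that themselves contain a colon (e.g. the subnqn nqn.2019-08.org.qemu:fdp-subsys3 above) survive intact, because read assigns everything after the first colon to val.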
00:10:52.612 06:36:03 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:52.612 06:36:03 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:52.612 06:36:03 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:52.612 06:36:03 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:52.612 06:36:03 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:52.612 06:36:03 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:52.612 06:36:03 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:52.612 06:36:03 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:52.612 06:36:03 -- scripts/common.sh@15 -- # local i 00:10:52.612 06:36:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:52.612 06:36:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:52.612 06:36:03 -- scripts/common.sh@24 -- # return 0 00:10:52.612 06:36:03 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:52.612 06:36:03 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:52.612 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.612 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:52.612 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.612 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:52.612 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:52.612 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:52.613 06:36:03 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.613 06:36:03 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:52.613 06:36:03 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.613 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 
00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.614 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:52.614 06:36:03 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.614 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:52.615 06:36:03 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:52.615 06:36:03 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:52.615 06:36:03 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:52.615 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.615 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.615 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.615 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.615 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
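The lbaf descriptors being captured here (lbaf0 through lbaf7 for each namespace) encode each LBA format as a metadata size (ms, bytes), an LBA data size given as a power of two (lbads), and a relative performance hint (rp); flbas=0x4 selects lbaf4, which the trace marks "(in use)". A minimal bash sketch of that decoding, assuming the same "ms:X lbads:Y rp:Z" layout the trace shows (decode_lbaf is a hypothetical helper, not part of functions.sh):

# Hypothetical helper: turn an "ms:X lbads:Y rp:Z" descriptor into sizes.
decode_lbaf() {
  local desc=$1 ms lbads rp
  ms=${desc#ms:};        ms=${ms%% *}        # metadata bytes per LBA
  lbads=${desc#*lbads:}; lbads=${lbads%% *}  # log2 of the LBA data size
  rp=${desc#*rp:};       rp=${rp%% *}        # relative performance
  echo "metadata=${ms}B block=$((1 << lbads))B rp=${rp}"
}
decode_lbaf 'ms:0 lbads:12 rp:0'   # -> metadata=0B block=4096B rp=0

So with lbaf4 in use, nvme1n1 presents 4096-byte blocks with no separate metadata region.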
00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:52.616 06:36:03 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:52.616 06:36:03 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:52.616 06:36:03 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:52.616 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.616 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:52.616 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.616 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:52.616 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.616 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:52.617 06:36:03 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.617 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:52.617 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.617 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
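Both namespace dumps above follow the same capture pattern: the harness runs the bundled nvme-cli's id-ns against the device, splits each output line on ":" into a register name and a value, and evals the pair into a global associative array (nvme1n1, nvme1n2, ...). A minimal sketch of that pattern under those assumptions, not the verbatim functions.sh source:

#!/usr/bin/env bash
# Sketch of the id-ns capture loop suggested by the trace above.
declare -A ns_info
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}            # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
  [[ -n $reg && -n $val ]] || continue
  ns_info[$reg]=${val# }              # keep the value, minus the separator's space
done < <(nvme id-ns /dev/nvme1n1)
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

The "[[ -n '' ]]" checks visible right after each id-ns invocation appear to be this filter skipping the banner line of the output, whose value field is empty.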
00:10:52.618 06:36:03 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:52.618 06:36:03 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:52.618 06:36:03 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:52.618 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.618 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.618 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.618 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.618 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:52.619 06:36:03 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:52.619 06:36:03 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:52.619 06:36:03 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:52.619 06:36:03 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:52.619 06:36:03 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:52.619 06:36:03 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:52.619 06:36:03 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:52.619 06:36:03 -- scripts/common.sh@15 -- # local i 00:10:52.619 06:36:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:52.619 06:36:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:52.619 06:36:03 -- scripts/common.sh@24 -- # return 0 00:10:52.619 06:36:03 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:52.619 06:36:03 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:52.619 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.619 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.619 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:52.619 06:36:03 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:52.619 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 
06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.620 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.620 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:52.620 06:36:03 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 
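Two of the id-ctrl fields just captured are easy to misread: wctemp/cctemp are Kelvin, not Celsius, and mdts is a power-of-two multiplier on the controller's minimum page size. A small sketch of both conversions, assuming the usual 4 KiB minimum page size (CAP.MPSMIN=0) for this QEMU controller:

# Interpret the thresholds and MDTS captured above (values from the trace).
wctemp=343; cctemp=373; mdts=7
echo "warning at $((wctemp - 273)) C, critical at $((cctemp - 273)) C"   # 70 C / 100 C
# MDTS limits a single transfer to 2^mdts minimum-size pages:
echo "max transfer: $(( (1 << mdts) * 4096 / 1024 )) KiB"                # 512 KiB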
00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 
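Combining the namespace and format data above gives each namespace's usable size: nsze=0x100000 LBAs under the in-use lbaf4 (lbads:12, i.e. 4096-byte blocks). A one-line sketch of that arithmetic:

# Namespace capacity from the fields captured above.
nsze=0x100000   # LBAs, from the id-ns dump
lbads=12        # log2(block size) of the in-use format (lbaf4)
echo "$(( nsze * (1 << lbads) / 1024 / 1024 / 1024 )) GiB"   # -> 4 GiB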
00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:52.621 06:36:03 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:52.621 06:36:03 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.621 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.621 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 
-- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:52.622 06:36:03 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:52.622 06:36:03 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:52.622 06:36:03 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:52.622 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.622 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # 
nvme2n1[nsfeat]=0x14 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.622 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.622 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:52.622 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 
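The id-ns fields above are gathered by the namespace walk that opened at functions.sh@53-57: a nameref binds the controller's namespace table, each /sys/class/nvme/nvme2/nvme2n1-style entry is parsed with the same read/eval loop, and @58 keys it by namespace index. A sketch under the assumption that nvme_get is the parser sketched earlier and that the nvme2_ns array already exists:

    walk_namespaces() {                             # nameref needs a function scope
        local ctrl=$1 ns ns_dev                     # e.g. ctrl=/sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl##*/}_ns            # functions.sh@53: bind nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do         # functions.sh@54: nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue                # functions.sh@55
            ns_dev=${ns##*/}                        # functions.sh@56: -> nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # functions.sh@57: parse as above
            _ctrl_ns[${ns##*n}]=$ns_dev             # functions.sh@58: key by namespace index
        done
    }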
00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:52.623 06:36:03 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.623 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.623 06:36:03 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:52.623 06:36:03 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:52.623 06:36:03 -- 
nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:52.624 06:36:03 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:52.624 06:36:03 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:52.624 06:36:03 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:52.624 06:36:03 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:52.624 06:36:03 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:52.624 06:36:03 -- scripts/common.sh@15 -- # local i 00:10:52.624 06:36:03 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:52.624 06:36:03 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:52.624 06:36:03 -- scripts/common.sh@24 -- # return 0 00:10:52.624 06:36:03 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:52.624 06:36:03 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:52.624 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.624 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- 
# nvme3[cntrltype]=1 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.624 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.624 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:52.624 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:52.625 06:36:03 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.625 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:52.625 06:36:03 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:52.625 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 
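Capability fields collected for nvme3 above, such as oacs=0x12a, frmw=0x3, and lpa=0x7, are bitmasks, and consumers test individual bits rather than whole values. A short illustration; the bit labels follow the NVMe base specification and are informational only:

    oacs=0x12a                                                     # as captured for nvme3 above
    (( oacs & (1 << 1) )) && echo "Format NVM supported"           # bit 1
    (( oacs & (1 << 3) )) && echo "Namespace Management supported" # bit 3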
00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
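The sqes=0x66 and cqes=0x44 captured above pack two log2 sizes per byte: the low nibble is the required entry size and the high nibble the maximum, so 0x66 means 2^6 = 64-byte submission-queue entries and 0x44 means 2^4 = 16-byte completion-queue entries. A quick decode:

    sqes=0x66 cqes=0x44                             # as captured for nvme3 above
    printf 'SQE: %d..%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))   # 64..64
    printf 'CQE: %d..%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))   # 16..16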
00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.626 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.626 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:52.626 06:36:03 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:52.886 06:36:03 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:52.886 06:36:03 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:52.886 06:36:03 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:52.886 06:36:03 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@18 -- # shift 00:10:52.886 06:36:03 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:52.886 
06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.886 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.886 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:52.886 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.887 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:52.887 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:52.887 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:52.888 06:36:03 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # IFS=: 00:10:52.888 06:36:03 -- nvme/functions.sh@21 -- # read -r reg val 00:10:52.888 06:36:03 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:52.888 06:36:03 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:52.888 06:36:03 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:52.888 06:36:03 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:52.888 06:36:03 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:52.888 06:36:03 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:52.888 06:36:03 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:52.888 06:36:03 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:52.888 06:36:03 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:52.888 06:36:03 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:52.888 06:36:03 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # echo nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:52.888 06:36:03 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:52.888 
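The namespace dump above reports nlbaf=7, flbas=0x4, and lbaf4 as "ms:0 lbads:12 rp:0 (in use)". Per the NVMe spec, FLBAS bits 3:0 select the in-use LBA format index and the data block size is 2^LBADS bytes, so nvme3n1 is formatted with 4096-byte blocks and no metadata. A small sketch of that decode, with the values hard-coded from the trace above:

# Sketch: decode FLBAS/LBAF as they appear in the nvme3n1 trace above.
flbas=0x4        # from nvme3n1[flbas]
lbads=12         # lbads field of the in-use format (lbaf4)
fmt_index=$(( flbas & 0x0f ))     # FLBAS bits 3:0 pick the LBA format
block_size=$(( 1 << lbads ))      # data size is 2^LBADS bytes
echo "in-use LBA format: lbaf${fmt_index}, block size: ${block_size} bytes"
# -> in-use LBA format: lbaf4, block size: 4096 bytes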
06:36:03 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:52.888 06:36:03 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:52.888 06:36:03 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # echo nvme0 00:10:52.888 06:36:03 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # echo nvme3 00:10:52.888 06:36:03 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:52.888 06:36:03 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:52.888 06:36:03 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:52.888 06:36:03 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:52.888 06:36:03 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:52.888 06:36:03 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:52.888 06:36:03 -- nvme/functions.sh@76 -- # echo 0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:52.888 06:36:03 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@197 -- # echo nvme2 00:10:52.888 06:36:03 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:52.888 06:36:03 -- nvme/functions.sh@206 -- # echo nvme1 00:10:52.888 06:36:03 -- nvme/functions.sh@207 -- # return 0 00:10:52.888 06:36:03 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:52.888 06:36:03 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:10:52.888 06:36:03 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:53.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:53.717 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:53.717 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:53.717 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:53.717 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:53.717 06:36:04 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:53.717 06:36:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:53.717 06:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.717 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:53.717 ************************************ 00:10:53.717 START TEST nvme_simple_copy 00:10:53.717 ************************************ 00:10:53.717 06:36:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:53.978 Initializing NVMe Controllers 00:10:53.978 Attaching to 0000:00:08.0 00:10:53.978 Controller supports SCC. Attached to 0000:00:08.0 00:10:53.978 Namespace ID: 1 size: 4GB 00:10:53.978 Initialization complete. 00:10:53.978 00:10:53.978 Controller QEMU NVMe Ctrl (12342 ) 00:10:53.978 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:53.978 Namespace Block Size:4096 00:10:53.978 Writing LBAs 0 to 63 with Random Data 00:10:53.978 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:53.978 LBAs matching Written Data: 64 00:10:53.978 00:10:53.978 real 0m0.224s 00:10:53.978 user 0m0.082s 00:10:53.978 sys 0m0.041s 00:10:53.978 06:36:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.978 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:53.978 ************************************ 00:10:53.978 END TEST nvme_simple_copy 00:10:53.978 ************************************ 00:10:53.978 00:10:53.978 real 0m7.423s 00:10:53.978 user 0m0.997s 00:10:53.978 sys 0m1.342s 00:10:53.978 06:36:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:53.978 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:53.978 ************************************ 00:10:53.978 END TEST nvme_scc 00:10:53.978 ************************************ 00:10:53.978 06:36:04 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:53.978 06:36:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:53.978 06:36:04 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:53.978 06:36:04 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:53.978 06:36:04 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:53.978 06:36:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:53.978 06:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.978 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:53.978 ************************************ 00:10:53.978 START TEST nvme_fdp 00:10:53.978 ************************************ 00:10:53.978 06:36:04 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:54.239 * Looking for test storage... 
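The get_ctrls_with_feature trace above selects controllers whose ONCS register has bit 8 (Simple Copy Command) set; 0x15d includes 0x100, so all four QEMU controllers qualify and nvme1 is picked for the simple-copy run logged here. A one-liner sketch of the same capability check outside the harness; the device path is illustrative, and the awk relies on nvme-cli's human-readable id-ctrl format:

# Sketch: does this controller advertise Simple Copy (ONCS bit 8)?
oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/ /,"",$2); print $2}')
if (( oncs & (1 << 8) )); then
    echo "Simple Copy supported (oncs=$oncs)"
fi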
00:10:54.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:54.239 06:36:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:54.239 06:36:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:54.239 06:36:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:54.239 06:36:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:54.239 06:36:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:54.239 06:36:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:54.239 06:36:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:54.239 06:36:04 -- scripts/common.sh@335 -- # IFS=.-: 00:10:54.239 06:36:04 -- scripts/common.sh@335 -- # read -ra ver1 00:10:54.239 06:36:04 -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.239 06:36:04 -- scripts/common.sh@336 -- # read -ra ver2 00:10:54.239 06:36:04 -- scripts/common.sh@337 -- # local 'op=<' 00:10:54.239 06:36:04 -- scripts/common.sh@339 -- # ver1_l=2 00:10:54.239 06:36:04 -- scripts/common.sh@340 -- # ver2_l=1 00:10:54.239 06:36:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:54.239 06:36:04 -- scripts/common.sh@343 -- # case "$op" in 00:10:54.239 06:36:04 -- scripts/common.sh@344 -- # : 1 00:10:54.239 06:36:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:54.239 06:36:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.239 06:36:04 -- scripts/common.sh@364 -- # decimal 1 00:10:54.239 06:36:04 -- scripts/common.sh@352 -- # local d=1 00:10:54.239 06:36:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.239 06:36:04 -- scripts/common.sh@354 -- # echo 1 00:10:54.239 06:36:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:54.239 06:36:04 -- scripts/common.sh@365 -- # decimal 2 00:10:54.239 06:36:04 -- scripts/common.sh@352 -- # local d=2 00:10:54.239 06:36:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.239 06:36:04 -- scripts/common.sh@354 -- # echo 2 00:10:54.239 06:36:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:54.239 06:36:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:54.239 06:36:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:54.239 06:36:04 -- scripts/common.sh@367 -- # return 0 00:10:54.239 06:36:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.239 06:36:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:54.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.239 --rc genhtml_branch_coverage=1 00:10:54.239 --rc genhtml_function_coverage=1 00:10:54.239 --rc genhtml_legend=1 00:10:54.239 --rc geninfo_all_blocks=1 00:10:54.239 --rc geninfo_unexecuted_blocks=1 00:10:54.239 00:10:54.239 ' 00:10:54.239 06:36:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:54.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.239 --rc genhtml_branch_coverage=1 00:10:54.239 --rc genhtml_function_coverage=1 00:10:54.239 --rc genhtml_legend=1 00:10:54.239 --rc geninfo_all_blocks=1 00:10:54.239 --rc geninfo_unexecuted_blocks=1 00:10:54.239 00:10:54.239 ' 00:10:54.239 06:36:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:54.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.239 --rc genhtml_branch_coverage=1 00:10:54.239 --rc genhtml_function_coverage=1 00:10:54.239 --rc genhtml_legend=1 00:10:54.239 --rc geninfo_all_blocks=1 00:10:54.239 --rc geninfo_unexecuted_blocks=1 00:10:54.239 00:10:54.239 ' 00:10:54.239 06:36:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:54.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.239 --rc genhtml_branch_coverage=1 00:10:54.239 --rc genhtml_function_coverage=1 00:10:54.239 --rc genhtml_legend=1 00:10:54.239 --rc geninfo_all_blocks=1 00:10:54.239 --rc geninfo_unexecuted_blocks=1 00:10:54.239 00:10:54.239 ' 00:10:54.239 06:36:04 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:54.239 06:36:04 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:54.239 06:36:04 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:54.239 06:36:04 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:54.239 06:36:04 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.239 06:36:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.239 06:36:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.239 06:36:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.240 06:36:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.240 06:36:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.240 06:36:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.240 06:36:04 -- paths/export.sh@5 -- # export PATH 00:10:54.240 06:36:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.240 06:36:04 -- nvme/functions.sh@10 -- # ctrls=() 00:10:54.240 06:36:04 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:54.240 06:36:04 -- nvme/functions.sh@11 -- # nvmes=() 00:10:54.240 06:36:04 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:54.240 06:36:04 -- nvme/functions.sh@12 -- # bdfs=() 00:10:54.240 06:36:04 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:54.240 06:36:04 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:54.240 06:36:04 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:54.240 
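The `lt 1.15 2` trace a little further up is scripts/common.sh gating lcov-specific coverage flags on the installed lcov version: cmp_versions splits both version strings on `.`, `-`, and `:` and compares them field by field. A compressed sketch of the same idea, simplified relative to the real helper and assuming purely numeric fields:

# Sketch of the scripts/common.sh version gate traced above (simplified).
version_lt() {            # returns 0 if $1 < $2, comparing split fields
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1              # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"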
06:36:04 -- nvme/functions.sh@14 -- # nvme_name= 00:10:54.240 06:36:04 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:54.240 06:36:04 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:54.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:54.507 Waiting for block devices as requested 00:10:54.770 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.770 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.770 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.770 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:00.060 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:00.060 06:36:10 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:00.060 06:36:10 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:00.060 06:36:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.060 06:36:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:00.060 06:36:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:00.060 06:36:10 -- scripts/common.sh@15 -- # local i 00:11:00.060 06:36:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:00.060 06:36:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.060 06:36:10 -- scripts/common.sh@24 -- # return 0 00:11:00.060 06:36:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:00.060 06:36:10 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:00.060 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.060 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:00.060 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.060 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.060 06:36:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:00.060 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:00.061 06:36:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 
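The controller above reports ctratt=0x88010, and the FDP test only makes sense on controllers whose CTRATT advertises Flexible Data Placement. To my reading of the spec (TP4146), FDP is CTRATT bit 19 and Endurance Groups is bit 4, both set in 0x88010; treat the bit positions in this sketch as assumptions to verify against the current NVMe base spec:

# Sketch: test CTRATT bits from the value traced above.
ctratt=0x88010
(( ctratt & (1 << 4)  )) && echo "endurance groups supported"       # CTRATT bit 4
(( ctratt & (1 << 19) )) && echo "flexible data placement (FDP)"    # CTRATT bit 19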
06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:00.061 
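The wctemp=343 and cctemp=373 values above are the warning and critical composite temperature thresholds, which the spec reports in Kelvin; converting makes the QEMU defaults easier to read:

# Sketch: WCTEMP/CCTEMP are reported in Kelvin; convert the traced values.
wctemp=343; cctemp=373
echo "warning threshold:  $(( wctemp - 273 )) C"    # -> 70 C
echo "critical threshold: $(( cctemp - 273 )) C"    # -> 100 C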
06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.061 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.061 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.061 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:00.062 06:36:10 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 
06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 
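The sqes=0x66 and cqes=0x44 values above encode the required (low nibble) and maximum (high nibble) queue entry sizes as powers of two, so this controller uses fixed 64-byte submission entries and 16-byte completion entries. A quick decode:

# Sketch: decode the SQES/CQES nibbles from the values traced above.
sqes=0x66; cqes=0x44
printf 'SQ entry: min %d, max %d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
printf 'CQ entry: min %d, max %d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))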
00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.062 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:00.062 06:36:10 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:00.062 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:00.063 06:36:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
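
At this point the trace has finished folding nvme0's id-ctrl output into an associative array and registered it in ctrls. functions.sh itself is not reproduced in this log, but its pattern is visible in the trace: split each output line on ':' with read -r reg val, then eval the pair into the array. A minimal re-creation of that loop (the whitespace trimming is an assumption; the real script may differ):

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banner/blank lines
        reg=${reg//[[:space:]]/}           # strip padding around the key
        val=${val# }                       # drop the space after ':'
        eval "nvme0[$reg]=\"$val\""        # same quoting style as the trace above
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "nn=${nvme0[nn]} oncs=${nvme0[oncs]} vwc=${nvme0[vwc]}"
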
00:11:00.063 06:36:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:00.063 06:36:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:00.063 06:36:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:00.063 06:36:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.063 06:36:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:00.063 06:36:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:00.063 06:36:10 -- scripts/common.sh@15 -- # local i 00:11:00.063 06:36:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:00.063 06:36:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.063 06:36:10 -- scripts/common.sh@24 -- # return 0 00:11:00.063 06:36:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:00.063 06:36:10 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:00.063 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.063 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.063 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:00.063 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.063 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:00.064 
06:36:10 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 
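
The ver field parsed a few entries up (0x10400, same for both controllers) packs the NVMe spec version as major/minor/tertiary in bits 31:16, 15:8, and 7:0; decoded:

    ver=0x10400   # from the id-ctrl dump above
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0
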
00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 
00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.064 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.064 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:00.064 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
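
The wctemp=343 and cctemp=373 captured just above are composite-temperature thresholds, which id-ctrl reports in Kelvin; converting makes the QEMU defaults obvious:

    wctemp=343; cctemp=373                   # Kelvin, from the dump above
    echo "warning:  $(( wctemp - 273 )) C"   # 70 C
    echo "critical: $(( cctemp - 273 )) C"   # 100 C
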
00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.065 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.065 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.065 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:00.066 06:36:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:00.066 06:36:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.066 06:36:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:00.066 06:36:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:00.066 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.066 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:00.066 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.066 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.066 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 
00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:00.067 06:36:10 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:00.067 06:36:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.067 06:36:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:00.067 06:36:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:00.067 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.067 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.067 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:00.067 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 
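
nvme1n1 was registered into _ctrl_ns above (and the same id-ns walk has just started for nvme1n2); its geometry follows from the captured fields: flbas bits 3:0 select the in-use LBA format, here 0x4 picking lbaf4, whose "lbads:12" means 2^12 = 4096-byte blocks, while nsze gives the block count. A hypothetical decode (not part of functions.sh):

    flbas=$(( 0x4 & 0xf ))       # in-use LBA format index -> 4
    lbads=12                     # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)" above
    nsze=$(( 0x100000 ))         # namespace size in blocks
    echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296 = 4 GiB
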
00:11:00.067 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:00.068 06:36:10 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 
06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:00.068 06:36:10 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.068 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.068 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.068 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:00.069 06:36:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.069 06:36:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
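The trace above is the nvme_get helper filling the nvme1n2 associative array: it runs `nvme id-ns`, splits each output line on `:` with `IFS=:` and `read -r reg val`, and stores every field via eval; the closing `_ctrl_ns[${ns##*n}]=nvme1n2` strips everything through the last `n` so the per-controller namespace map is keyed by namespace number. A minimal sketch of that parse pattern, simplified from what the trace shows (the real nvme/functions.sh nvme_get takes the array name as an argument and evals into it; `ns_info` here is a hypothetical name):

    declare -A ns_info
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
        reg=${reg//[[:space:]]/}               # strip the padding nvme-cli adds
        val=${val#"${val%%[![:space:]]*}"}     # left-trim the value
        ns_info[$reg]=$val
    done < <(nvme id-ns /dev/nvme1n2)          # needs nvme-cli and the device node
    echo "flbas=${ns_info[flbas]} nsze=${ns_info[nsze]}"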
00:11:00.069 06:36:10 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:00.069 06:36:10 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:00.069 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.069 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.069 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.069 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:00.069 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.070 06:36:10 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:00.070 06:36:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:00.070 06:36:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:00.070 06:36:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:00.070 06:36:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:00.070 06:36:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.070 06:36:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:00.070 06:36:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:00.070 06:36:10 -- scripts/common.sh@15 -- # local i 00:11:00.070 06:36:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:00.070 06:36:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.070 06:36:10 -- scripts/common.sh@24 -- # return 0 00:11:00.070 06:36:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:00.070 06:36:10 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:00.070 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.070 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.070 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.070 06:36:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.070 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
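The id-ctrl fields just captured identify this controller as QEMU's NVMe model (vid 0x1b36, mn "QEMU NVMe Ctrl", sn 12340) and report mdts=7 and ver=0x10400. The version field packs major/minor/tertiary bytes, and MDTS expresses the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size; assuming the usual CAP.MPSMIN of 0 (4 KiB pages), the two values decode as in this sketch:

    ver=0x10400 mdts=7 mpsmin_bytes=4096   # 4 KiB assumes CAP.MPSMIN=0
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0
    echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"
    # -> max transfer: 512 KiB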
00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 
06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:00.071 06:36:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.071 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.071 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:00.071 06:36:10 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:00.072 06:36:10 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 
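A few records back the trace stored sqes=0x66 and cqes=0x44. Those fields encode the required (low nibble) and maximum (high nibble) queue-entry sizes as powers of two, which a quick decode confirms are the standard 64-byte submission and 16-byte completion entries:

    sqes=0x66 cqes=0x44
    echo "SQ entry: $((1 << (sqes & 0xf)))B, max $((1 << (sqes >> 4)))B"   # 64B, max 64B
    echo "CQ entry: $((1 << (cqes & 0xf)))B, max $((1 << (cqes >> 4)))B"   # 16B, max 16B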
00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.072 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:00.072 06:36:10 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.072 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 
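The oncs=0x15d value just stored is the Optional NVM Command Support bitmask. The bit names in this sketch follow the NVMe base specification's ordering (bit 0 Compare through bit 8 Copy) and are an annotation for the reader, not something the script itself checks:

    oncs=0x15d
    names=(Compare Write-Uncorrectable Dataset-Management Write-Zeroes
           Save/Select-in-Features Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs >> i & 1 )) && echo "ONCS bit $i: ${names[i]}"
    done

Bits 0, 2, 3, 4, 6 and 8 are set, so Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy are advertised while Reservations is not, which matches the namespaces above reporting rescap=0.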
00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.073 06:36:10 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:00.073 06:36:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.073 06:36:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:00.073 06:36:10 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:00.073 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.073 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 
06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:00.073 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.073 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.073 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:00.074 06:36:10 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:00.074 
06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:00.074 06:36:10 
-- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:00.074 06:36:10 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.074 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.074 06:36:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:00.074 06:36:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:00.074 06:36:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:00.074 06:36:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:00.074 06:36:10 
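
nvme2n1's geometry can be read straight off those registers: flbas=0x7 selects LBA format 7, whose descriptor above reads "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with 64 bytes of per-block metadata, while nsze=0x17a17a gives the namespace size in blocks. A quick sanity check of the arithmetic (illustrative only):

    nsze=0x17a17a lbads=12
    printf 'blocks=%d bytes=%d\n' $((nsze)) $((nsze * (1 << lbads)))
    # blocks=1548666 bytes=6343335936   (~6.3 GB of data, metadata excluded)
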
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:00.074 06:36:10 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:00.074 06:36:10 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:00.074 06:36:10 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:00.074 06:36:10 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:00.074 06:36:10 -- scripts/common.sh@15 -- # local i 00:11:00.074 06:36:10 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:00.074 06:36:10 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.074 06:36:10 -- scripts/common.sh@24 -- # return 0 00:11:00.074 06:36:10 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:00.074 06:36:10 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:00.075 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.075 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 
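
Enumeration of nvme3 starts the same way: scripts/common.sh checks the controller's PCI address (0000:00:07.0) against the allow/block lists, and since both are empty here (the [[ =~ ]] match against an empty list, then [[ -z '' ]]), pci_can_use returns 0 and the id-ctrl parse begins. The gate amounts to something like this sketch (variable names illustrative; the real helper in scripts/common.sh differs in detail):

    pci_can_use() {    # $1 = PCI BDF, e.g. 0000:00:07.0
        local bdf=$1
        # an explicit blocklist entry always wins
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
        # an empty allowlist means every device may be used
        [[ -z $PCI_ALLOWED ]] && return 0
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }
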
-- # eval 'nvme3[ieee]="525400"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 
06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.075 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:00.075 06:36:10 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.075 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.076 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.076 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.076 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:00.077 
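
sqes=0x66 and cqes=0x44 each pack two 4-bit fields: the low nibble is the required queue entry size and the high nibble the maximum, both as powers of two. So this controller uses fixed 2^6 = 64-byte submission queue entries and 2^4 = 16-byte completion queue entries. A small decode helper (illustrative):

    decode_qes() {    # $1 = sqes or cqes byte from id-ctrl
        local v=$(( $1 ))
        printf 'required=%d max=%d bytes\n' $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))
    }
    decode_qes 0x66   # required=64 max=64 bytes (SQ)
    decode_qes 0x44   # required=16 max=16 bytes (CQ)
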
06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:00.077 06:36:10 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:00.077 06:36:10 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:00.077 06:36:10 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:00.077 06:36:10 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@18 -- # shift 00:11:00.077 06:36:10 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:00.077 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.077 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.077 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:00.078 
06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 
06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.078 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:00.078 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.078 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:00.079 06:36:10 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # IFS=: 00:11:00.079 06:36:10 -- nvme/functions.sh@21 -- # read -r reg val 00:11:00.079 06:36:10 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:00.079 06:36:10 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:00.079 06:36:10 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:00.079 06:36:10 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:00.079 06:36:10 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:00.079 06:36:10 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:00.079 06:36:10 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:00.079 06:36:10 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:00.079 06:36:10 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 06:36:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:00.079 06:36:10 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:00.079 06:36:10 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:00.079 06:36:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:00.079 06:36:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 06:36:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:00.079 06:36:10 -- 
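
With all four controllers parsed into ctrls[], nvmes[], bdfs[] and ordered_ctrls[], get_ctrls_with_feature now walks them and keeps only controllers whose ctratt has bit 19 (Flexible Data Placement) set. nvme1, nvme2 and nvme3 report ctratt=0x8000, which fails the test; as the lines below show, only the controller reporting 0x88010 passes, since 0x88010 & (1 << 19) = 0x80000 is non-zero. The check reduces to this (a sketch mirroring the trace's bit test):

    ctrl_has_fdp() {            # $1 = ctratt value from id-ctrl
        (( $1 & 1 << 19 ))      # bit 19 advertises FDP support
    }
    ctrl_has_fdp 0x8000  && echo fdp || echo no-fdp   # no-fdp
    ctrl_has_fdp 0x88010 && echo fdp || echo no-fdp   # fdp
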
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:00.079 06:36:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@197 -- # echo nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 06:36:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:00.079 06:36:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:00.079 06:36:10 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:00.079 06:36:10 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:00.079 06:36:10 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:00.079 06:36:10 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:00.079 06:36:10 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:00.079 06:36:10 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:00.079 06:36:10 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:00.079 06:36:10 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # trap - ERR 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # print_backtrace 00:11:00.079 06:36:10 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:00.079 06:36:10 -- common/autotest_common.sh@1142 -- # return 0 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # trap - ERR 00:11:00.079 06:36:10 -- nvme/functions.sh@204 -- # print_backtrace 00:11:00.079 06:36:10 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:00.079 06:36:10 -- common/autotest_common.sh@1142 -- # return 0 00:11:00.079 06:36:10 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:00.079 06:36:10 -- nvme/functions.sh@206 -- # echo nvme0 00:11:00.079 06:36:10 -- nvme/functions.sh@207 -- # return 0 00:11:00.079 06:36:10 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:00.079 06:36:10 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:00.079 06:36:10 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:01.021 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:01.021 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:01.021 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.021 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.021 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.021 06:36:11 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:01.021 06:36:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:01.021 06:36:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.021 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 ************************************ 00:11:01.021 START TEST nvme_flexible_data_placement 00:11:01.021 ************************************ 00:11:01.021 06:36:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:01.283 Initializing NVMe Controllers 00:11:01.283 Attaching to 0000:00:09.0 00:11:01.283 Controller supports FDP Attached to 0000:00:09.0 00:11:01.283 Namespace ID: 1 Endurance Group ID: 1 00:11:01.283 Initialization complete. 00:11:01.283 00:11:01.283 ================================== 00:11:01.283 == FDP tests for Namespace: #01 == 00:11:01.283 ================================== 00:11:01.283 00:11:01.283 Get Feature: FDP: 00:11:01.283 ================= 00:11:01.283 Enabled: Yes 00:11:01.283 FDP configuration Index: 0 00:11:01.283 00:11:01.283 FDP configurations log page 00:11:01.283 =========================== 00:11:01.283 Number of FDP configurations: 1 00:11:01.283 Version: 0 00:11:01.283 Size: 112 00:11:01.283 FDP Configuration Descriptor: 0 00:11:01.283 Descriptor Size: 96 00:11:01.283 Reclaim Group Identifier format: 2 00:11:01.283 FDP Volatile Write Cache: Not Present 00:11:01.283 FDP Configuration: Valid 00:11:01.283 Vendor Specific Size: 0 00:11:01.283 Number of Reclaim Groups: 2 00:11:01.283 Number of Reclaim Unit Handles: 8 00:11:01.283 Max Placement Identifiers: 128 00:11:01.283 Number of Namespaces Supported: 256 00:11:01.283 Reclaim Unit Nominal Size: 6000000 bytes 00:11:01.283 Estimated Reclaim Unit Time Limit: Not Reported 00:11:01.283 RUH Desc #000: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #001: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #002: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #003: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #004: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #005: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #006: RUH Type: Initially Isolated 00:11:01.283 RUH Desc #007: RUH Type: Initially Isolated 00:11:01.283 00:11:01.283 FDP reclaim unit handle usage log page 00:11:01.283 ====================================== 00:11:01.283 Number of Reclaim Unit Handles: 8 00:11:01.283 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:01.283 RUH Usage Desc #001: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #002: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #003: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #004: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #005: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #006: RUH Attributes: Unused 00:11:01.283 RUH Usage Desc #007: RUH Attributes: Unused 00:11:01.283 00:11:01.283 FDP statistics log page 00:11:01.283 ======================= 00:11:01.283 Host bytes with metadata written: 1861816320 00:11:01.283 Media bytes with metadata written: 1863135232 00:11:01.283 Media bytes erased: 0 00:11:01.283 00:11:01.283 FDP Reclaim unit handle status 
00:11:01.283 ============================== 00:11:01.283 Number of RUHS descriptors: 2 00:11:01.283 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000306f 00:11:01.284 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:01.284 00:11:01.284 FDP write on placement id: 0 success 00:11:01.284 00:11:01.284 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:01.284 00:11:01.284 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:01.284 00:11:01.284 Get Feature: FDP Events for Placement handle: #0 00:11:01.284 ======================== 00:11:01.284 Number of FDP Events: 6 00:11:01.284 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:01.284 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:01.284 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:01.284 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:01.284 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:01.284 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:01.284 00:11:01.284 FDP events log page 00:11:01.284 =================== 00:11:01.284 Number of FDP events: 1 00:11:01.284 FDP Event #0: 00:11:01.284 Event Type: RU Not Written to Capacity 00:11:01.284 Placement Identifier: Valid 00:11:01.284 NSID: Valid 00:11:01.284 Location: Valid 00:11:01.284 Placement Identifier: 0 00:11:01.284 Event Timestamp: 3 00:11:01.284 Namespace Identifier: 1 00:11:01.284 Reclaim Group Identifier: 0 00:11:01.284 Reclaim Unit Handle Identifier: 0 00:11:01.284 00:11:01.284 FDP test passed 00:11:01.284 00:11:01.284 real 0m0.188s 00:11:01.284 user 0m0.043s 00:11:01.284 sys 0m0.044s 00:11:01.284 06:36:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:01.284 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 ************************************ 00:11:01.284 END TEST nvme_flexible_data_placement 00:11:01.284 ************************************ 00:11:01.284 00:11:01.284 real 0m7.277s 00:11:01.284 user 0m0.933s 00:11:01.284 sys 0m1.350s 00:11:01.284 06:36:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:01.284 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 ************************************ 00:11:01.284 END TEST nvme_fdp 00:11:01.284 ************************************ 00:11:01.284 06:36:12 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:01.284 06:36:12 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:01.284 06:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:01.284 06:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.284 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:01.284 ************************************ 00:11:01.284 START TEST nvme_rpc 00:11:01.284 ************************************ 00:11:01.284 06:36:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:01.546 * Looking for test storage... 
00:11:01.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:01.546 06:36:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:01.546 06:36:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:01.546 06:36:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:01.546 06:36:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:01.546 06:36:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:01.546 06:36:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:01.546 06:36:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:01.546 06:36:12 -- scripts/common.sh@335 -- # IFS=.-: 00:11:01.546 06:36:12 -- scripts/common.sh@335 -- # read -ra ver1 00:11:01.546 06:36:12 -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.546 06:36:12 -- scripts/common.sh@336 -- # read -ra ver2 00:11:01.546 06:36:12 -- scripts/common.sh@337 -- # local 'op=<' 00:11:01.546 06:36:12 -- scripts/common.sh@339 -- # ver1_l=2 00:11:01.546 06:36:12 -- scripts/common.sh@340 -- # ver2_l=1 00:11:01.546 06:36:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:01.546 06:36:12 -- scripts/common.sh@343 -- # case "$op" in 00:11:01.546 06:36:12 -- scripts/common.sh@344 -- # : 1 00:11:01.546 06:36:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:01.546 06:36:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.546 06:36:12 -- scripts/common.sh@364 -- # decimal 1 00:11:01.546 06:36:12 -- scripts/common.sh@352 -- # local d=1 00:11:01.546 06:36:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.546 06:36:12 -- scripts/common.sh@354 -- # echo 1 00:11:01.546 06:36:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:01.546 06:36:12 -- scripts/common.sh@365 -- # decimal 2 00:11:01.546 06:36:12 -- scripts/common.sh@352 -- # local d=2 00:11:01.546 06:36:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.546 06:36:12 -- scripts/common.sh@354 -- # echo 2 00:11:01.546 06:36:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:01.546 06:36:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:01.546 06:36:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:01.546 06:36:12 -- scripts/common.sh@367 -- # return 0 00:11:01.546 06:36:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.546 06:36:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.546 --rc genhtml_branch_coverage=1 00:11:01.546 --rc genhtml_function_coverage=1 00:11:01.546 --rc genhtml_legend=1 00:11:01.546 --rc geninfo_all_blocks=1 00:11:01.546 --rc geninfo_unexecuted_blocks=1 00:11:01.546 00:11:01.546 ' 00:11:01.546 06:36:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.546 --rc genhtml_branch_coverage=1 00:11:01.546 --rc genhtml_function_coverage=1 00:11:01.546 --rc genhtml_legend=1 00:11:01.546 --rc geninfo_all_blocks=1 00:11:01.546 --rc geninfo_unexecuted_blocks=1 00:11:01.546 00:11:01.546 ' 00:11:01.546 06:36:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.546 --rc genhtml_branch_coverage=1 00:11:01.546 --rc genhtml_function_coverage=1 00:11:01.546 --rc genhtml_legend=1 00:11:01.546 --rc geninfo_all_blocks=1 00:11:01.546 --rc geninfo_unexecuted_blocks=1 00:11:01.546 00:11:01.546 ' 00:11:01.546 06:36:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:01.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.546 --rc genhtml_branch_coverage=1 00:11:01.546 --rc genhtml_function_coverage=1 00:11:01.546 --rc genhtml_legend=1 00:11:01.546 --rc geninfo_all_blocks=1 00:11:01.546 --rc geninfo_unexecuted_blocks=1 00:11:01.546 00:11:01.546 ' 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:01.546 06:36:12 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:01.546 06:36:12 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:01.546 06:36:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:01.546 06:36:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:01.546 06:36:12 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:01.546 06:36:12 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:01.546 06:36:12 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:01.546 06:36:12 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:01.546 06:36:12 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:01.546 06:36:12 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:01.546 06:36:12 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:01.546 06:36:12 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:01.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77928 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77928 00:11:01.546 06:36:12 -- common/autotest_common.sh@829 -- # '[' -z 77928 ']' 00:11:01.546 06:36:12 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:01.546 06:36:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.546 06:36:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.546 06:36:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.546 06:36:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.546 06:36:12 -- common/autotest_common.sh@10 -- # set +x 00:11:01.546 [2024-11-28 06:36:12.266932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:01.546 [2024-11-28 06:36:12.267035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77928 ] 00:11:01.808 [2024-11-28 06:36:12.401988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.808 [2024-11-28 06:36:12.433507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:01.808 [2024-11-28 06:36:12.433789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.808 [2024-11-28 06:36:12.433847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.381 06:36:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.382 06:36:13 -- common/autotest_common.sh@862 -- # return 0 00:11:02.382 06:36:13 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:02.641 Nvme0n1 00:11:02.641 06:36:13 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:02.641 06:36:13 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:02.900 request: 00:11:02.900 { 00:11:02.900 "filename": "non_existing_file", 00:11:02.900 "bdev_name": "Nvme0n1", 00:11:02.900 "method": "bdev_nvme_apply_firmware", 00:11:02.900 "req_id": 1 00:11:02.900 } 00:11:02.900 Got JSON-RPC error response 00:11:02.900 response: 00:11:02.900 { 00:11:02.900 "code": -32603, 00:11:02.900 "message": "open file failed." 00:11:02.900 } 00:11:02.900 06:36:13 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:02.900 06:36:13 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:02.900 06:36:13 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:03.158 06:36:13 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:03.158 06:36:13 -- nvme/nvme_rpc.sh@40 -- # killprocess 77928 00:11:03.159 06:36:13 -- common/autotest_common.sh@936 -- # '[' -z 77928 ']' 00:11:03.159 06:36:13 -- common/autotest_common.sh@940 -- # kill -0 77928 00:11:03.159 06:36:13 -- common/autotest_common.sh@941 -- # uname 00:11:03.159 06:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.159 06:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77928 00:11:03.159 06:36:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:03.159 killing process with pid 77928 00:11:03.159 06:36:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:03.159 06:36:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77928' 00:11:03.159 06:36:13 -- common/autotest_common.sh@955 -- # kill 77928 00:11:03.159 06:36:13 -- common/autotest_common.sh@960 -- # wait 77928 00:11:03.419 00:11:03.419 real 0m1.948s 00:11:03.419 user 0m3.836s 00:11:03.419 sys 0m0.400s 00:11:03.419 06:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.419 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:11:03.419 ************************************ 00:11:03.419 END TEST nvme_rpc 00:11:03.419 ************************************ 00:11:03.419 06:36:13 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:03.419 06:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:03.419 06:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:11:03.419 06:36:13 -- common/autotest_common.sh@10 -- # set +x 00:11:03.419 ************************************ 00:11:03.419 START TEST nvme_rpc_timeouts 00:11:03.419 ************************************ 00:11:03.419 06:36:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:03.419 * Looking for test storage... 00:11:03.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:03.419 06:36:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:03.419 06:36:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:03.419 06:36:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:03.419 06:36:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:03.419 06:36:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:03.419 06:36:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:03.419 06:36:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:03.419 06:36:14 -- scripts/common.sh@335 -- # IFS=.-: 00:11:03.419 06:36:14 -- scripts/common.sh@335 -- # read -ra ver1 00:11:03.419 06:36:14 -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.419 06:36:14 -- scripts/common.sh@336 -- # read -ra ver2 00:11:03.419 06:36:14 -- scripts/common.sh@337 -- # local 'op=<' 00:11:03.419 06:36:14 -- scripts/common.sh@339 -- # ver1_l=2 00:11:03.419 06:36:14 -- scripts/common.sh@340 -- # ver2_l=1 00:11:03.419 06:36:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:03.419 06:36:14 -- scripts/common.sh@343 -- # case "$op" in 00:11:03.419 06:36:14 -- scripts/common.sh@344 -- # : 1 00:11:03.419 06:36:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:03.419 06:36:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.419 06:36:14 -- scripts/common.sh@364 -- # decimal 1 00:11:03.419 06:36:14 -- scripts/common.sh@352 -- # local d=1 00:11:03.419 06:36:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.419 06:36:14 -- scripts/common.sh@354 -- # echo 1 00:11:03.419 06:36:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:03.419 06:36:14 -- scripts/common.sh@365 -- # decimal 2 00:11:03.419 06:36:14 -- scripts/common.sh@352 -- # local d=2 00:11:03.419 06:36:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.419 06:36:14 -- scripts/common.sh@354 -- # echo 2 00:11:03.419 06:36:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:03.419 06:36:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:03.419 06:36:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:03.419 06:36:14 -- scripts/common.sh@367 -- # return 0 00:11:03.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
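The "Waiting for process to start up..." line above is printed by the harness's waitforlisten helper, which blocks until the freshly launched spdk_tgt answers on its JSON-RPC Unix socket. A minimal sketch of the same polling idea, assuming a target started as in this run and the rpc.py shipped in this repo (the retry loop itself is illustrative, not SPDK's exact implementation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
  # rpc_get_methods only succeeds once the target is listening on the socket
  if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
    echo "spdk_tgt is up on $sock"
    break
  fi
  sleep 0.1
done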
00:11:03.419 06:36:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.419 06:36:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:03.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.419 --rc genhtml_branch_coverage=1 00:11:03.419 --rc genhtml_function_coverage=1 00:11:03.419 --rc genhtml_legend=1 00:11:03.419 --rc geninfo_all_blocks=1 00:11:03.419 --rc geninfo_unexecuted_blocks=1 00:11:03.419 00:11:03.419 ' 00:11:03.419 06:36:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:03.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.419 --rc genhtml_branch_coverage=1 00:11:03.419 --rc genhtml_function_coverage=1 00:11:03.419 --rc genhtml_legend=1 00:11:03.419 --rc geninfo_all_blocks=1 00:11:03.419 --rc geninfo_unexecuted_blocks=1 00:11:03.419 00:11:03.419 ' 00:11:03.419 06:36:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:03.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.419 --rc genhtml_branch_coverage=1 00:11:03.419 --rc genhtml_function_coverage=1 00:11:03.419 --rc genhtml_legend=1 00:11:03.419 --rc geninfo_all_blocks=1 00:11:03.419 --rc geninfo_unexecuted_blocks=1 00:11:03.419 00:11:03.419 ' 00:11:03.419 06:36:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:03.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.419 --rc genhtml_branch_coverage=1 00:11:03.419 --rc genhtml_function_coverage=1 00:11:03.419 --rc genhtml_legend=1 00:11:03.419 --rc geninfo_all_blocks=1 00:11:03.419 --rc geninfo_unexecuted_blocks=1 00:11:03.419 00:11:03.419 ' 00:11:03.419 06:36:14 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.419 06:36:14 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77981 00:11:03.420 06:36:14 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77981 00:11:03.420 06:36:14 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78013 00:11:03.420 06:36:14 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:03.420 06:36:14 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78013 00:11:03.420 06:36:14 -- common/autotest_common.sh@829 -- # '[' -z 78013 ']' 00:11:03.420 06:36:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.420 06:36:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.420 06:36:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.420 06:36:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.420 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:11:03.420 06:36:14 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:03.681 [2024-11-28 06:36:14.203640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:03.681 [2024-11-28 06:36:14.203754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78013 ] 00:11:03.681 [2024-11-28 06:36:14.337655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.681 [2024-11-28 06:36:14.367213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:03.681 [2024-11-28 06:36:14.367749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.681 [2024-11-28 06:36:14.367801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.255 06:36:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.255 Checking default timeout settings: 00:11:04.255 06:36:15 -- common/autotest_common.sh@862 -- # return 0 00:11:04.255 06:36:15 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:04.255 06:36:15 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:04.827 Making settings changes with rpc: 00:11:04.827 06:36:15 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:04.827 06:36:15 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:04.827 Check default vs. modified settings: 00:11:04.827 06:36:15 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:04.827 06:36:15 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:05.086 06:36:15 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:05.086 06:36:15 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:05.086 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77981 00:11:05.086 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:05.086 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 Setting action_on_timeout is changed as expected. 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
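Condensed from the xtrace above, the sequence the timeouts test drives is: snapshot the default configuration, change the NVMe timeout settings over RPC, then snapshot again. A sketch with the exact flags and snapshot file names from this run (assumes a running spdk_tgt on the default socket):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" save_config > /tmp/settings_default_77981    # defaults: action_on_timeout=none, timeouts 0
"$rpc" bdev_nvme_set_options \
    --timeout-us=12000000 \
    --timeout-admin-us=24000000 \
    --action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified_77981   # snapshot after the change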
00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 Setting timeout_us is changed as expected. 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:05.345 Setting timeout_admin_us is changed as expected. 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77981 /tmp/settings_modified_77981 00:11:05.345 06:36:15 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78013 00:11:05.345 06:36:15 -- common/autotest_common.sh@936 -- # '[' -z 78013 ']' 00:11:05.345 06:36:15 -- common/autotest_common.sh@940 -- # kill -0 78013 00:11:05.345 06:36:15 -- common/autotest_common.sh@941 -- # uname 00:11:05.345 06:36:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.345 06:36:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78013 00:11:05.345 killing process with pid 78013 00:11:05.345 06:36:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:05.345 06:36:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:05.345 06:36:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78013' 00:11:05.345 06:36:15 -- common/autotest_common.sh@955 -- # kill 78013 00:11:05.345 06:36:15 -- common/autotest_common.sh@960 -- # wait 78013 00:11:05.606 RPC TIMEOUT SETTING TEST PASSED. 00:11:05.606 06:36:16 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
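The per-setting verification above reduces to extracting each key from the two JSON snapshots and confirming it moved off its default. A condensed sketch of that grep/awk/sed pipeline, assuming the snapshot files created in the previous step (the inequality check is a simplification of the script's explicit default-vs-modified comparison):

for setting in action_on_timeout timeout_us timeout_admin_us; do
  # pull the value out of lines like "timeout_us": 12000000, and strip punctuation
  before=$(grep "$setting" /tmp/settings_default_77981 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified_77981 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  # e.g. action_on_timeout: none -> abort; timeout_us: 0 -> 12000000
  [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
done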
00:11:05.606 00:11:05.606 real 0m2.215s 00:11:05.606 user 0m4.435s 00:11:05.606 sys 0m0.463s 00:11:05.606 06:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.606 ************************************ 00:11:05.606 END TEST nvme_rpc_timeouts 00:11:05.606 ************************************ 00:11:05.606 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.606 06:36:16 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:05.606 06:36:16 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:05.606 06:36:16 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:05.607 06:36:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.607 06:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.607 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 ************************************ 00:11:05.607 START TEST nvme_xnvme 00:11:05.607 ************************************ 00:11:05.607 06:36:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:05.607 * Looking for test storage... 00:11:05.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:05.607 06:36:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.607 06:36:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.607 06:36:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.868 06:36:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.868 06:36:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.868 06:36:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.868 06:36:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.868 06:36:16 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.868 06:36:16 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.868 06:36:16 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.868 06:36:16 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.868 06:36:16 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.868 06:36:16 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.868 06:36:16 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.868 06:36:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.868 06:36:16 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.868 06:36:16 -- scripts/common.sh@344 -- # : 1 00:11:05.868 06:36:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.868 06:36:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.868 06:36:16 -- scripts/common.sh@364 -- # decimal 1 00:11:05.868 06:36:16 -- scripts/common.sh@352 -- # local d=1 00:11:05.868 06:36:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.868 06:36:16 -- scripts/common.sh@354 -- # echo 1 00:11:05.868 06:36:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.868 06:36:16 -- scripts/common.sh@365 -- # decimal 2 00:11:05.868 06:36:16 -- scripts/common.sh@352 -- # local d=2 00:11:05.868 06:36:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.868 06:36:16 -- scripts/common.sh@354 -- # echo 2 00:11:05.869 06:36:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.869 06:36:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.869 06:36:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.869 06:36:16 -- scripts/common.sh@367 -- # return 0 00:11:05.869 06:36:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.869 06:36:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.869 --rc genhtml_branch_coverage=1 00:11:05.869 --rc genhtml_function_coverage=1 00:11:05.869 --rc genhtml_legend=1 00:11:05.869 --rc geninfo_all_blocks=1 00:11:05.869 --rc geninfo_unexecuted_blocks=1 00:11:05.869 00:11:05.869 ' 00:11:05.869 06:36:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.869 --rc genhtml_branch_coverage=1 00:11:05.869 --rc genhtml_function_coverage=1 00:11:05.869 --rc genhtml_legend=1 00:11:05.869 --rc geninfo_all_blocks=1 00:11:05.869 --rc geninfo_unexecuted_blocks=1 00:11:05.869 00:11:05.869 ' 00:11:05.869 06:36:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.869 --rc genhtml_branch_coverage=1 00:11:05.869 --rc genhtml_function_coverage=1 00:11:05.869 --rc genhtml_legend=1 00:11:05.869 --rc geninfo_all_blocks=1 00:11:05.869 --rc geninfo_unexecuted_blocks=1 00:11:05.869 00:11:05.869 ' 00:11:05.869 06:36:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.869 --rc genhtml_branch_coverage=1 00:11:05.869 --rc genhtml_function_coverage=1 00:11:05.869 --rc genhtml_legend=1 00:11:05.869 --rc geninfo_all_blocks=1 00:11:05.869 --rc geninfo_unexecuted_blocks=1 00:11:05.869 00:11:05.869 ' 00:11:05.869 06:36:16 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.869 06:36:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.869 06:36:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.869 06:36:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.869 06:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.869 06:36:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.869 06:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.869 06:36:16 -- paths/export.sh@5 -- # export PATH 00:11:05.869 06:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:05.869 06:36:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.869 06:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.869 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.869 ************************************ 00:11:05.869 START TEST xnvme_to_malloc_dd_copy 00:11:05.869 ************************************ 00:11:05.869 06:36:16 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:05.869 06:36:16 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:05.869 06:36:16 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:05.869 06:36:16 -- dd/common.sh@191 -- # return 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@18 -- # local io 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:05.869 06:36:16 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:05.869 06:36:16 -- dd/common.sh@31 -- # xtrace_disable 00:11:05.869 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:11:05.869 { 00:11:05.869 "subsystems": [ 00:11:05.869 { 00:11:05.869 "subsystem": "bdev", 00:11:05.869 "config": [ 00:11:05.869 { 00:11:05.869 "params": { 00:11:05.869 "block_size": 512, 00:11:05.869 "num_blocks": 2097152, 00:11:05.869 "name": "malloc0" 00:11:05.869 }, 00:11:05.869 "method": "bdev_malloc_create" 00:11:05.869 }, 00:11:05.869 { 00:11:05.869 "params": { 00:11:05.869 "io_mechanism": "libaio", 00:11:05.869 "filename": "/dev/nullb0", 00:11:05.869 "name": "null0" 00:11:05.869 }, 00:11:05.869 "method": "bdev_xnvme_create" 00:11:05.869 }, 00:11:05.869 { 00:11:05.869 "method": "bdev_wait_for_examine" 00:11:05.869 } 00:11:05.869 ] 00:11:05.869 } 00:11:05.869 ] 00:11:05.869 } 00:11:05.869 [2024-11-28 06:36:16.546924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:05.869 [2024-11-28 06:36:16.547055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78131 ] 00:11:06.131 [2024-11-28 06:36:16.683494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.131 [2024-11-28 06:36:16.734402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.516  [2024-11-28T06:36:19.324Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-28T06:36:20.292Z] Copying: 445/1024 [MB] (228 MBps) [2024-11-28T06:36:21.236Z] Copying: 742/1024 [MB] (296 MBps) [2024-11-28T06:36:21.496Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:11:10.726 00:11:10.726 06:36:21 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:10.726 06:36:21 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:10.726 06:36:21 -- dd/common.sh@31 -- # xtrace_disable 00:11:10.726 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:11:10.726 { 00:11:10.726 "subsystems": [ 00:11:10.726 { 00:11:10.726 "subsystem": "bdev", 00:11:10.726 "config": [ 00:11:10.726 { 00:11:10.726 "params": { 00:11:10.726 "block_size": 512, 00:11:10.726 "num_blocks": 2097152, 00:11:10.726 "name": "malloc0" 00:11:10.726 }, 00:11:10.726 "method": "bdev_malloc_create" 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "params": { 00:11:10.726 "io_mechanism": "libaio", 00:11:10.726 "filename": "/dev/nullb0", 00:11:10.726 "name": "null0" 00:11:10.726 }, 00:11:10.726 "method": "bdev_xnvme_create" 00:11:10.726 }, 00:11:10.726 { 00:11:10.726 "method": "bdev_wait_for_examine" 00:11:10.726 } 00:11:10.726 ] 00:11:10.726 } 00:11:10.726 ] 00:11:10.726 } 00:11:10.726 [2024-11-28 06:36:21.372372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:10.726 [2024-11-28 06:36:21.372478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78192 ] 00:11:10.984 [2024-11-28 06:36:21.503939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.984 [2024-11-28 06:36:21.531439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.360  [2024-11-28T06:36:24.065Z] Copying: 315/1024 [MB] (315 MBps) [2024-11-28T06:36:24.999Z] Copying: 633/1024 [MB] (317 MBps) [2024-11-28T06:36:24.999Z] Copying: 950/1024 [MB] (317 MBps) [2024-11-28T06:36:25.574Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:11:14.804 00:11:14.804 06:36:25 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:14.804 06:36:25 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:14.804 06:36:25 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:14.804 06:36:25 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:14.804 06:36:25 -- dd/common.sh@31 -- # xtrace_disable 00:11:14.804 06:36:25 -- common/autotest_common.sh@10 -- # set +x 00:11:14.804 { 00:11:14.804 "subsystems": [ 00:11:14.804 { 00:11:14.804 "subsystem": "bdev", 00:11:14.804 "config": [ 00:11:14.804 { 00:11:14.804 "params": { 00:11:14.804 "block_size": 512, 00:11:14.804 "num_blocks": 2097152, 00:11:14.804 "name": "malloc0" 00:11:14.804 }, 00:11:14.804 "method": "bdev_malloc_create" 00:11:14.804 }, 00:11:14.804 { 00:11:14.804 "params": { 00:11:14.804 "io_mechanism": "io_uring", 00:11:14.804 "filename": "/dev/nullb0", 00:11:14.804 "name": "null0" 00:11:14.804 }, 00:11:14.804 "method": "bdev_xnvme_create" 00:11:14.804 }, 00:11:14.804 { 00:11:14.804 "method": "bdev_wait_for_examine" 00:11:14.804 } 00:11:14.804 ] 00:11:14.804 } 00:11:14.804 ] 00:11:14.804 } 00:11:14.804 [2024-11-28 06:36:25.367042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:14.804 [2024-11-28 06:36:25.367159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78246 ] 00:11:14.804 [2024-11-28 06:36:25.500232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.804 [2024-11-28 06:36:25.527130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.179  [2024-11-28T06:36:27.884Z] Copying: 325/1024 [MB] (325 MBps) [2024-11-28T06:36:28.818Z] Copying: 650/1024 [MB] (325 MBps) [2024-11-28T06:36:29.076Z] Copying: 976/1024 [MB] (325 MBps) [2024-11-28T06:36:29.335Z] Copying: 1024/1024 [MB] (average 325 MBps) 00:11:18.565 00:11:18.565 06:36:29 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:18.565 06:36:29 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:18.565 06:36:29 -- dd/common.sh@31 -- # xtrace_disable 00:11:18.565 06:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:18.565 { 00:11:18.565 "subsystems": [ 00:11:18.565 { 00:11:18.565 "subsystem": "bdev", 00:11:18.565 "config": [ 00:11:18.565 { 00:11:18.565 "params": { 00:11:18.565 "block_size": 512, 00:11:18.565 "num_blocks": 2097152, 00:11:18.565 "name": "malloc0" 00:11:18.565 }, 00:11:18.565 "method": "bdev_malloc_create" 00:11:18.565 }, 00:11:18.565 { 00:11:18.565 "params": { 00:11:18.565 "io_mechanism": "io_uring", 00:11:18.565 "filename": "/dev/nullb0", 00:11:18.565 "name": "null0" 00:11:18.565 }, 00:11:18.565 "method": "bdev_xnvme_create" 00:11:18.565 }, 00:11:18.565 { 00:11:18.565 "method": "bdev_wait_for_examine" 00:11:18.565 } 00:11:18.565 ] 00:11:18.565 } 00:11:18.565 ] 00:11:18.565 } 00:11:18.565 [2024-11-28 06:36:29.239750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:18.565 [2024-11-28 06:36:29.239967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78301 ] 00:11:18.824 [2024-11-28 06:36:29.374667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.824 [2024-11-28 06:36:29.401376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.198  [2024-11-28T06:36:31.903Z] Copying: 330/1024 [MB] (330 MBps) [2024-11-28T06:36:32.838Z] Copying: 661/1024 [MB] (331 MBps) [2024-11-28T06:36:32.839Z] Copying: 992/1024 [MB] (330 MBps) [2024-11-28T06:36:33.097Z] Copying: 1024/1024 [MB] (average 330 MBps) 00:11:22.327 00:11:22.327 06:36:33 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:22.327 06:36:33 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:22.327 00:11:22.327 real 0m16.597s 00:11:22.327 user 0m13.700s 00:11:22.327 sys 0m2.397s 00:11:22.327 06:36:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:22.327 ************************************ 00:11:22.327 END TEST xnvme_to_malloc_dd_copy 00:11:22.327 ************************************ 00:11:22.327 06:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:22.327 06:36:33 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:22.327 06:36:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.327 06:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.327 06:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:22.586 ************************************ 00:11:22.586 START TEST xnvme_bdevperf 00:11:22.586 ************************************ 00:11:22.586 06:36:33 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:22.586 06:36:33 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:22.586 06:36:33 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:22.586 06:36:33 -- dd/common.sh@191 -- # return 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@60 -- # local io 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:22.586 06:36:33 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:22.586 06:36:33 -- dd/common.sh@31 -- # xtrace_disable 00:11:22.586 06:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:22.586 { 00:11:22.586 "subsystems": [ 00:11:22.586 { 00:11:22.586 "subsystem": "bdev", 00:11:22.586 "config": [ 00:11:22.586 { 00:11:22.586 "params": { 00:11:22.586 "io_mechanism": "libaio", 
00:11:22.586 "filename": "/dev/nullb0", 00:11:22.586 "name": "null0" 00:11:22.586 }, 00:11:22.586 "method": "bdev_xnvme_create" 00:11:22.586 }, 00:11:22.586 { 00:11:22.586 "method": "bdev_wait_for_examine" 00:11:22.586 } 00:11:22.586 ] 00:11:22.586 } 00:11:22.586 ] 00:11:22.586 } 00:11:22.586 [2024-11-28 06:36:33.181796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:22.586 [2024-11-28 06:36:33.181881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78373 ] 00:11:22.586 [2024-11-28 06:36:33.312134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.586 [2024-11-28 06:36:33.342527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.844 Running I/O for 5 seconds... 00:11:28.113 00:11:28.113 Latency(us) 00:11:28.114 [2024-11-28T06:36:38.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.114 [2024-11-28T06:36:38.884Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:28.114 null0 : 5.00 194539.57 759.92 0.00 0.00 326.69 111.85 475.77 00:11:28.114 [2024-11-28T06:36:38.884Z] =================================================================================================================== 00:11:28.114 [2024-11-28T06:36:38.884Z] Total : 194539.57 759.92 0.00 0.00 326.69 111.85 475.77 00:11:28.114 06:36:38 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:28.114 06:36:38 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:28.114 06:36:38 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:28.114 06:36:38 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:28.114 06:36:38 -- dd/common.sh@31 -- # xtrace_disable 00:11:28.114 06:36:38 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 { 00:11:28.114 "subsystems": [ 00:11:28.114 { 00:11:28.114 "subsystem": "bdev", 00:11:28.114 "config": [ 00:11:28.114 { 00:11:28.114 "params": { 00:11:28.114 "io_mechanism": "io_uring", 00:11:28.114 "filename": "/dev/nullb0", 00:11:28.114 "name": "null0" 00:11:28.114 }, 00:11:28.114 "method": "bdev_xnvme_create" 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "method": "bdev_wait_for_examine" 00:11:28.114 } 00:11:28.114 ] 00:11:28.114 } 00:11:28.114 ] 00:11:28.114 } 00:11:28.114 [2024-11-28 06:36:38.619065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:28.114 [2024-11-28 06:36:38.619287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78440 ] 00:11:28.114 [2024-11-28 06:36:38.754121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.114 [2024-11-28 06:36:38.780783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.114 Running I/O for 5 seconds... 
00:11:33.412 00:11:33.412 Latency(us) 00:11:33.412 [2024-11-28T06:36:44.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.412 [2024-11-28T06:36:44.182Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:33.412 null0 : 5.00 242568.03 947.53 0.00 0.00 261.62 159.11 419.05 00:11:33.412 [2024-11-28T06:36:44.182Z] =================================================================================================================== 00:11:33.412 [2024-11-28T06:36:44.182Z] Total : 242568.03 947.53 0.00 0.00 261.62 159.11 419.05 00:11:33.412 06:36:43 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:33.412 06:36:43 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:33.412 ************************************ 00:11:33.412 END TEST xnvme_bdevperf 00:11:33.412 ************************************ 00:11:33.412 00:11:33.412 real 0m10.894s 00:11:33.412 user 0m8.613s 00:11:33.412 sys 0m2.057s 00:11:33.412 06:36:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.412 06:36:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 ************************************ 00:11:33.412 END TEST nvme_xnvme 00:11:33.412 ************************************ 00:11:33.412 00:11:33.412 real 0m27.747s 00:11:33.412 user 0m22.414s 00:11:33.412 sys 0m4.581s 00:11:33.412 06:36:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.412 06:36:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 06:36:44 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:33.412 06:36:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.412 06:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.412 06:36:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 ************************************ 00:11:33.412 START TEST blockdev_xnvme 00:11:33.412 ************************************ 00:11:33.412 06:36:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:33.412 * Looking for test storage... 00:11:33.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:33.412 06:36:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:33.412 06:36:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:33.412 06:36:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:33.671 06:36:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:33.671 06:36:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:33.671 06:36:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:33.671 06:36:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:33.671 06:36:44 -- scripts/common.sh@335 -- # IFS=.-: 00:11:33.671 06:36:44 -- scripts/common.sh@335 -- # read -ra ver1 00:11:33.671 06:36:44 -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.671 06:36:44 -- scripts/common.sh@336 -- # read -ra ver2 00:11:33.671 06:36:44 -- scripts/common.sh@337 -- # local 'op=<' 00:11:33.671 06:36:44 -- scripts/common.sh@339 -- # ver1_l=2 00:11:33.671 06:36:44 -- scripts/common.sh@340 -- # ver2_l=1 00:11:33.671 06:36:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:33.671 06:36:44 -- scripts/common.sh@343 -- # case "$op" in 00:11:33.671 06:36:44 -- scripts/common.sh@344 -- # : 1 00:11:33.671 06:36:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:33.671 06:36:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.671 06:36:44 -- scripts/common.sh@364 -- # decimal 1 00:11:33.671 06:36:44 -- scripts/common.sh@352 -- # local d=1 00:11:33.671 06:36:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.671 06:36:44 -- scripts/common.sh@354 -- # echo 1 00:11:33.671 06:36:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:33.671 06:36:44 -- scripts/common.sh@365 -- # decimal 2 00:11:33.671 06:36:44 -- scripts/common.sh@352 -- # local d=2 00:11:33.671 06:36:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.671 06:36:44 -- scripts/common.sh@354 -- # echo 2 00:11:33.671 06:36:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:33.671 06:36:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:33.671 06:36:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:33.671 06:36:44 -- scripts/common.sh@367 -- # return 0 00:11:33.671 06:36:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.671 06:36:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.671 --rc genhtml_branch_coverage=1 00:11:33.671 --rc genhtml_function_coverage=1 00:11:33.671 --rc genhtml_legend=1 00:11:33.671 --rc geninfo_all_blocks=1 00:11:33.671 --rc geninfo_unexecuted_blocks=1 00:11:33.671 00:11:33.671 ' 00:11:33.671 06:36:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:33.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.671 --rc genhtml_branch_coverage=1 00:11:33.671 --rc genhtml_function_coverage=1 00:11:33.671 --rc genhtml_legend=1 00:11:33.671 --rc geninfo_all_blocks=1 00:11:33.671 --rc geninfo_unexecuted_blocks=1 00:11:33.671 00:11:33.671 ' 00:11:33.672 06:36:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:33.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.672 --rc genhtml_branch_coverage=1 00:11:33.672 --rc genhtml_function_coverage=1 00:11:33.672 --rc genhtml_legend=1 00:11:33.672 --rc geninfo_all_blocks=1 00:11:33.672 --rc geninfo_unexecuted_blocks=1 00:11:33.672 00:11:33.672 ' 00:11:33.672 06:36:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:33.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.672 --rc genhtml_branch_coverage=1 00:11:33.672 --rc genhtml_function_coverage=1 00:11:33.672 --rc genhtml_legend=1 00:11:33.672 --rc geninfo_all_blocks=1 00:11:33.672 --rc geninfo_unexecuted_blocks=1 00:11:33.672 00:11:33.672 ' 00:11:33.672 06:36:44 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:33.672 06:36:44 -- bdev/nbd_common.sh@6 -- # set -e 00:11:33.672 06:36:44 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:33.672 06:36:44 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:33.672 06:36:44 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:33.672 06:36:44 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:33.672 06:36:44 -- bdev/blockdev.sh@18 -- # : 00:11:33.672 06:36:44 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:33.672 06:36:44 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:33.672 06:36:44 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:33.672 06:36:44 -- bdev/blockdev.sh@672 -- # uname -s 00:11:33.672 06:36:44 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:33.672 06:36:44 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:33.672 06:36:44 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:33.672 06:36:44 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:33.672 06:36:44 -- bdev/blockdev.sh@682 -- # dek= 00:11:33.672 06:36:44 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:33.672 06:36:44 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:33.672 06:36:44 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:33.672 06:36:44 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:33.672 06:36:44 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:33.672 06:36:44 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:33.672 06:36:44 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=78577 00:11:33.672 06:36:44 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:33.672 06:36:44 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:33.672 06:36:44 -- bdev/blockdev.sh@47 -- # waitforlisten 78577 00:11:33.672 06:36:44 -- common/autotest_common.sh@829 -- # '[' -z 78577 ']' 00:11:33.672 06:36:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.672 06:36:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.672 06:36:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.672 06:36:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.672 06:36:44 -- common/autotest_common.sh@10 -- # set +x 00:11:33.672 [2024-11-28 06:36:44.287017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:33.672 [2024-11-28 06:36:44.287279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78577 ] 00:11:33.672 [2024-11-28 06:36:44.420899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.931 [2024-11-28 06:36:44.448086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.931 [2024-11-28 06:36:44.448364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.502 06:36:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.502 06:36:45 -- common/autotest_common.sh@862 -- # return 0 00:11:34.502 06:36:45 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:34.502 06:36:45 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:34.502 06:36:45 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:34.502 06:36:45 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:34.502 06:36:45 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.020 Waiting for block devices as requested 00:11:35.020 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.020 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.020 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.020 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.288 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:40.288 06:36:50 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:40.288 06:36:50 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:40.288 06:36:50 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:40.288 06:36:50 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:40.288 06:36:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:40.288 06:36:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:40.288 06:36:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:40.288 06:36:50 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:40.288 06:36:50 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:40.288 06:36:50 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:40.288 06:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.288 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.288 06:36:50 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:40.288 nvme0n1 00:11:40.288 nvme1n1 00:11:40.288 nvme1n2 00:11:40.288 nvme1n3 00:11:40.288 nvme2n1 00:11:40.288 nvme3n1 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:40.289 06:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.289 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@738 -- # cat 00:11:40.289 06:36:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:40.289 06:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.289 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:40.289 06:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.289 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:40.289 06:36:50 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.289 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:40.289 06:36:50 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:40.289 06:36:50 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:40.289 06:36:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.289 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:11:40.289 06:36:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.289 06:36:50 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:40.289 06:36:51 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c3f6356d-2f91-49f5-a65a-8fa5ba8ba9c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c3f6356d-2f91-49f5-a65a-8fa5ba8ba9c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "48a93659-a57d-4c55-b4ed-24578b534d6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48a93659-a57d-4c55-b4ed-24578b534d6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "dce9fbf9-e649-4906-8e10-39d7baddfb6c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dce9fbf9-e649-4906-8e10-39d7baddfb6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d161159e-3ebd-4bb0-8b8b-3f57b2876c28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d161159e-3ebd-4bb0-8b8b-3f57b2876c28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 
'{' ' "name": "nvme2n1",' ' "aliases": [' ' "d64909cf-5f76-4dba-945b-da0ce02edd4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d64909cf-5f76-4dba-945b-da0ce02edd4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a28b19a6-8dd6-4791-818c-2ea894119f16"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a28b19a6-8dd6-4791-818c-2ea894119f16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:40.289 06:36:51 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:40.289 06:36:51 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:40.289 06:36:51 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:40.289 06:36:51 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:40.289 06:36:51 -- bdev/blockdev.sh@752 -- # killprocess 78577 00:11:40.289 06:36:51 -- common/autotest_common.sh@936 -- # '[' -z 78577 ']' 00:11:40.289 06:36:51 -- common/autotest_common.sh@940 -- # kill -0 78577 00:11:40.289 06:36:51 -- common/autotest_common.sh@941 -- # uname 00:11:40.289 06:36:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:40.289 06:36:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78577 00:11:40.289 killing process with pid 78577 00:11:40.289 06:36:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:40.289 06:36:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:40.289 06:36:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78577' 00:11:40.289 06:36:51 -- common/autotest_common.sh@955 -- # kill 78577 00:11:40.289 06:36:51 -- common/autotest_common.sh@960 -- # wait 78577 00:11:40.547 06:36:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:40.547 06:36:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:40.547 06:36:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:40.547 06:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.547 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:11:40.547 ************************************ 00:11:40.547 START TEST bdev_hello_world 00:11:40.547 ************************************ 00:11:40.547 06:36:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:40.805 [2024-11-28 06:36:51.334772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:40.805 [2024-11-28 06:36:51.335000] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78941 ] 00:11:40.805 [2024-11-28 06:36:51.471377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.805 [2024-11-28 06:36:51.504595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.063 [2024-11-28 06:36:51.667400] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:41.063 [2024-11-28 06:36:51.667451] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:41.063 [2024-11-28 06:36:51.667470] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:41.063 [2024-11-28 06:36:51.669390] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:41.063 [2024-11-28 06:36:51.669759] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:41.063 [2024-11-28 06:36:51.669782] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:41.063 [2024-11-28 06:36:51.670177] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:41.063 00:11:41.063 [2024-11-28 06:36:51.670284] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:41.063 ************************************ 00:11:41.063 END TEST bdev_hello_world 00:11:41.063 ************************************ 00:11:41.063 00:11:41.063 real 0m0.536s 00:11:41.063 user 0m0.281s 00:11:41.063 sys 0m0.142s 00:11:41.063 06:36:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.063 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.322 06:36:51 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:41.322 06:36:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.322 06:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.322 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.322 ************************************ 00:11:41.322 START TEST bdev_bounds 00:11:41.322 ************************************ 00:11:41.322 06:36:51 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:41.322 06:36:51 -- bdev/blockdev.sh@288 -- # bdevio_pid=78966 00:11:41.322 06:36:51 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:41.322 Process bdevio pid: 78966 00:11:41.322 06:36:51 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 78966' 00:11:41.322 06:36:51 -- bdev/blockdev.sh@291 -- # waitforlisten 78966 00:11:41.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.322 06:36:51 -- common/autotest_common.sh@829 -- # '[' -z 78966 ']' 00:11:41.322 06:36:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.322 06:36:51 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:41.322 06:36:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.322 06:36:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:41.322 06:36:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.322 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:11:41.322 [2024-11-28 06:36:51.934857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.322 [2024-11-28 06:36:51.934968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78966 ] 00:11:41.322 [2024-11-28 06:36:52.070244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.580 [2024-11-28 06:36:52.103094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.580 [2024-11-28 06:36:52.103675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.580 [2024-11-28 06:36:52.103737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.147 06:36:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.147 06:36:52 -- common/autotest_common.sh@862 -- # return 0 00:11:42.147 06:36:52 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:42.147 I/O targets: 00:11:42.147 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:42.147 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:42.147 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:42.147 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:42.147 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:42.147 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:42.147 00:11:42.147 00:11:42.147 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.147 http://cunit.sourceforge.net/ 00:11:42.147 00:11:42.147 00:11:42.147 Suite: bdevio tests on: nvme3n1 00:11:42.147 Test: blockdev write read block ...passed 00:11:42.147 Test: blockdev write zeroes read block ...passed 00:11:42.147 Test: blockdev write zeroes read no split ...passed 00:11:42.147 Test: blockdev write zeroes read split ...passed 00:11:42.147 Test: blockdev write zeroes read split partial ...passed 00:11:42.147 Test: blockdev reset ...passed 00:11:42.147 Test: blockdev write read 8 blocks ...passed 00:11:42.147 Test: blockdev write read size > 128k ...passed 00:11:42.147 Test: blockdev write read invalid size ...passed 00:11:42.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.147 Test: blockdev write read max offset ...passed 00:11:42.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.147 Test: blockdev writev readv 8 blocks ...passed 00:11:42.147 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.147 Test: blockdev writev readv block ...passed 00:11:42.147 Test: blockdev writev readv size > 128k ...passed 00:11:42.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.147 Test: blockdev comparev and writev ...passed 00:11:42.147 Test: blockdev nvme passthru rw ...passed 00:11:42.147 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.147 Test: blockdev nvme admin passthru ...passed 00:11:42.147 Test: blockdev copy ...passed 00:11:42.147 Suite: bdevio tests on: nvme2n1 00:11:42.147 Test: blockdev write read block ...passed 00:11:42.147 Test: blockdev write zeroes read block ...passed 00:11:42.147 Test: blockdev write zeroes read no split ...passed 00:11:42.147 Test: blockdev 
write zeroes read split ...passed 00:11:42.147 Test: blockdev write zeroes read split partial ...passed 00:11:42.147 Test: blockdev reset ...passed 00:11:42.147 Test: blockdev write read 8 blocks ...passed 00:11:42.147 Test: blockdev write read size > 128k ...passed 00:11:42.147 Test: blockdev write read invalid size ...passed 00:11:42.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.147 Test: blockdev write read max offset ...passed 00:11:42.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.147 Test: blockdev writev readv 8 blocks ...passed 00:11:42.147 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.147 Test: blockdev writev readv block ...passed 00:11:42.147 Test: blockdev writev readv size > 128k ...passed 00:11:42.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.147 Test: blockdev comparev and writev ...passed 00:11:42.147 Test: blockdev nvme passthru rw ...passed 00:11:42.147 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.147 Test: blockdev nvme admin passthru ...passed 00:11:42.147 Test: blockdev copy ...passed 00:11:42.147 Suite: bdevio tests on: nvme1n3 00:11:42.147 Test: blockdev write read block ...passed 00:11:42.147 Test: blockdev write zeroes read block ...passed 00:11:42.147 Test: blockdev write zeroes read no split ...passed 00:11:42.148 Test: blockdev write zeroes read split ...passed 00:11:42.148 Test: blockdev write zeroes read split partial ...passed 00:11:42.148 Test: blockdev reset ...passed 00:11:42.148 Test: blockdev write read 8 blocks ...passed 00:11:42.148 Test: blockdev write read size > 128k ...passed 00:11:42.148 Test: blockdev write read invalid size ...passed 00:11:42.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.148 Test: blockdev write read max offset ...passed 00:11:42.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.148 Test: blockdev writev readv 8 blocks ...passed 00:11:42.148 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.148 Test: blockdev writev readv block ...passed 00:11:42.148 Test: blockdev writev readv size > 128k ...passed 00:11:42.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.148 Test: blockdev comparev and writev ...passed 00:11:42.148 Test: blockdev nvme passthru rw ...passed 00:11:42.148 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.148 Test: blockdev nvme admin passthru ...passed 00:11:42.148 Test: blockdev copy ...passed 00:11:42.148 Suite: bdevio tests on: nvme1n2 00:11:42.148 Test: blockdev write read block ...passed 00:11:42.148 Test: blockdev write zeroes read block ...passed 00:11:42.148 Test: blockdev write zeroes read no split ...passed 00:11:42.148 Test: blockdev write zeroes read split ...passed 00:11:42.148 Test: blockdev write zeroes read split partial ...passed 00:11:42.148 Test: blockdev reset ...passed 00:11:42.148 Test: blockdev write read 8 blocks ...passed 00:11:42.148 Test: blockdev write read size > 128k ...passed 00:11:42.148 Test: blockdev write read invalid size ...passed 00:11:42.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.407 Test: blockdev write read max offset 
...passed 00:11:42.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.407 Test: blockdev writev readv 8 blocks ...passed 00:11:42.407 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.407 Test: blockdev writev readv block ...passed 00:11:42.407 Test: blockdev writev readv size > 128k ...passed 00:11:42.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.407 Test: blockdev comparev and writev ...passed 00:11:42.407 Test: blockdev nvme passthru rw ...passed 00:11:42.407 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.407 Test: blockdev nvme admin passthru ...passed 00:11:42.407 Test: blockdev copy ...passed 00:11:42.407 Suite: bdevio tests on: nvme1n1 00:11:42.407 Test: blockdev write read block ...passed 00:11:42.407 Test: blockdev write zeroes read block ...passed 00:11:42.407 Test: blockdev write zeroes read no split ...passed 00:11:42.407 Test: blockdev write zeroes read split ...passed 00:11:42.407 Test: blockdev write zeroes read split partial ...passed 00:11:42.407 Test: blockdev reset ...passed 00:11:42.407 Test: blockdev write read 8 blocks ...passed 00:11:42.407 Test: blockdev write read size > 128k ...passed 00:11:42.407 Test: blockdev write read invalid size ...passed 00:11:42.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.407 Test: blockdev write read max offset ...passed 00:11:42.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.407 Test: blockdev writev readv 8 blocks ...passed 00:11:42.407 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.407 Test: blockdev writev readv block ...passed 00:11:42.407 Test: blockdev writev readv size > 128k ...passed 00:11:42.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.407 Test: blockdev comparev and writev ...passed 00:11:42.407 Test: blockdev nvme passthru rw ...passed 00:11:42.407 Test: blockdev nvme passthru vendor specific ...passed 00:11:42.407 Test: blockdev nvme admin passthru ...passed 00:11:42.407 Test: blockdev copy ...passed 00:11:42.407 Suite: bdevio tests on: nvme0n1 00:11:42.407 Test: blockdev write read block ...passed 00:11:42.407 Test: blockdev write zeroes read block ...passed 00:11:42.407 Test: blockdev write zeroes read no split ...passed 00:11:42.407 Test: blockdev write zeroes read split ...passed 00:11:42.407 Test: blockdev write zeroes read split partial ...passed 00:11:42.407 Test: blockdev reset ...passed 00:11:42.407 Test: blockdev write read 8 blocks ...passed 00:11:42.407 Test: blockdev write read size > 128k ...passed 00:11:42.407 Test: blockdev write read invalid size ...passed 00:11:42.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.407 Test: blockdev write read max offset ...passed 00:11:42.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.407 Test: blockdev writev readv 8 blocks ...passed 00:11:42.407 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.407 Test: blockdev writev readv block ...passed 00:11:42.407 Test: blockdev writev readv size > 128k ...passed 00:11:42.407 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.407 Test: blockdev comparev and writev ...passed 00:11:42.407 Test: blockdev nvme passthru rw ...passed 00:11:42.407 Test: 
blockdev nvme passthru vendor specific ...passed 00:11:42.407 Test: blockdev nvme admin passthru ...passed 00:11:42.407 Test: blockdev copy ...passed 00:11:42.407 00:11:42.407 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.407 suites 6 6 n/a 0 0 00:11:42.407 tests 138 138 138 0 0 00:11:42.407 asserts 780 780 780 0 n/a 00:11:42.407 00:11:42.407 Elapsed time = 0.416 seconds 00:11:42.407 0 00:11:42.407 06:36:52 -- bdev/blockdev.sh@293 -- # killprocess 78966 00:11:42.407 06:36:52 -- common/autotest_common.sh@936 -- # '[' -z 78966 ']' 00:11:42.407 06:36:52 -- common/autotest_common.sh@940 -- # kill -0 78966 00:11:42.407 06:36:52 -- common/autotest_common.sh@941 -- # uname 00:11:42.407 06:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.407 06:36:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78966 00:11:42.407 06:36:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:42.407 06:36:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:42.407 06:36:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78966' 00:11:42.407 killing process with pid 78966 00:11:42.407 06:36:53 -- common/autotest_common.sh@955 -- # kill 78966 00:11:42.407 06:36:53 -- common/autotest_common.sh@960 -- # wait 78966 00:11:42.407 06:36:53 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:42.407 00:11:42.407 real 0m1.293s 00:11:42.407 user 0m3.140s 00:11:42.407 sys 0m0.262s 00:11:42.407 06:36:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:42.407 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.407 ************************************ 00:11:42.407 END TEST bdev_bounds 00:11:42.407 ************************************ 00:11:42.666 06:36:53 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:42.666 06:36:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:42.666 06:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.666 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 ************************************ 00:11:42.666 START TEST bdev_nbd 00:11:42.666 ************************************ 00:11:42.666 06:36:53 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:42.666 06:36:53 -- bdev/blockdev.sh@298 -- # uname -s 00:11:42.666 06:36:53 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:42.666 06:36:53 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.666 06:36:53 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:42.666 06:36:53 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:42.666 06:36:53 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:42.666 06:36:53 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:42.666 06:36:53 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:42.666 06:36:53 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:42.666 06:36:53 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:42.666 06:36:53 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:42.666 
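nbd_function_test, whose setup is traced here, binds each of the six xnvme bdevs to an NBD node through the /var/tmp/spdk-nbd.sock RPC socket. A rough sketch of that pairing loop, assuming the bdev and device names from this run (the real helper can also let SPDK pick the node, and it verifies readiness with the waitfornbd probe shown further down):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
  # Export bdev i as NBD node i, then confirm the kernel lists it.
  "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
  grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions
done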
06:36:53 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:42.666 06:36:53 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:42.666 06:36:53 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:42.666 06:36:53 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:42.666 06:36:53 -- bdev/blockdev.sh@316 -- # nbd_pid=79016 00:11:42.666 06:36:53 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:42.666 06:36:53 -- bdev/blockdev.sh@318 -- # waitforlisten 79016 /var/tmp/spdk-nbd.sock 00:11:42.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:42.666 06:36:53 -- common/autotest_common.sh@829 -- # '[' -z 79016 ']' 00:11:42.666 06:36:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:42.666 06:36:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.666 06:36:53 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:42.666 06:36:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:42.666 06:36:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.666 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 [2024-11-28 06:36:53.297098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:42.666 [2024-11-28 06:36:53.297337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.666 [2024-11-28 06:36:53.433661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.923 [2024-11-28 06:36:53.466952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.524 06:36:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.524 06:36:54 -- common/autotest_common.sh@862 -- # return 0 00:11:43.524 06:36:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@24 -- # local i 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:43.524 06:36:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:43.782 06:36:54 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:43.782 06:36:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:43.782 06:36:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:43.782 06:36:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:43.782 06:36:54 -- common/autotest_common.sh@867 -- # local i 00:11:43.782 06:36:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:43.782 06:36:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:43.782 06:36:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:43.782 06:36:54 -- common/autotest_common.sh@871 -- # break 00:11:43.782 06:36:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:43.782 06:36:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:43.782 06:36:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.782 1+0 records in 00:11:43.782 1+0 records out 00:11:43.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788074 s, 5.2 MB/s 00:11:43.783 06:36:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.783 06:36:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:43.783 06:36:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.783 06:36:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:43.783 06:36:54 -- common/autotest_common.sh@887 -- # return 0 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:43.783 06:36:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:43.783 06:36:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:43.783 06:36:54 -- common/autotest_common.sh@867 -- # local i 00:11:43.783 06:36:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:43.783 06:36:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:43.783 06:36:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:43.783 06:36:54 -- common/autotest_common.sh@871 -- # break 00:11:43.783 06:36:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:43.783 06:36:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:43.783 06:36:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.783 1+0 records in 00:11:43.783 1+0 records out 00:11:43.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558045 s, 7.3 MB/s 00:11:43.783 06:36:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.041 06:36:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:44.041 06:36:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.041 06:36:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:44.041 06:36:54 -- common/autotest_common.sh@887 -- # return 0 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:44.041 06:36:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:44.041 06:36:54 -- common/autotest_common.sh@867 -- # local i 00:11:44.041 06:36:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:44.041 06:36:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:44.041 06:36:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:44.041 06:36:54 -- common/autotest_common.sh@871 -- # break 00:11:44.041 06:36:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:44.041 06:36:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:44.041 06:36:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.041 1+0 records in 00:11:44.041 1+0 records out 00:11:44.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716963 s, 5.7 MB/s 00:11:44.041 06:36:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.041 06:36:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:44.041 06:36:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.041 06:36:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:44.041 06:36:54 -- common/autotest_common.sh@887 -- # return 0 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:44.041 06:36:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:44.300 06:36:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:44.300 06:36:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:44.300 06:36:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:44.300 06:36:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:44.300 06:36:54 -- common/autotest_common.sh@867 -- # local i 00:11:44.300 06:36:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:44.300 06:36:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:44.300 06:36:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:44.300 06:36:54 -- common/autotest_common.sh@871 -- # break 00:11:44.300 06:36:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:44.300 06:36:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:44.300 06:36:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.300 1+0 records in 00:11:44.300 1+0 records out 00:11:44.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844263 s, 4.9 MB/s 00:11:44.300 06:36:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.300 06:36:54 -- common/autotest_common.sh@884 -- # size=4096 00:11:44.300 06:36:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.300 06:36:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:44.300 06:36:54 -- common/autotest_common.sh@887 -- # return 0 00:11:44.300 06:36:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:44.300 06:36:54 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:44.300 06:36:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:44.559 06:36:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:44.559 06:36:55 -- common/autotest_common.sh@867 -- # local i 00:11:44.559 06:36:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:44.559 06:36:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:44.559 06:36:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:44.559 06:36:55 -- common/autotest_common.sh@871 -- # break 00:11:44.559 06:36:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:44.559 06:36:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:44.559 06:36:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.559 1+0 records in 00:11:44.559 1+0 records out 00:11:44.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687778 s, 6.0 MB/s 00:11:44.559 06:36:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.559 06:36:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:44.559 06:36:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.559 06:36:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:44.559 06:36:55 -- common/autotest_common.sh@887 -- # return 0 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:44.559 06:36:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:44.819 06:36:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:44.819 06:36:55 -- common/autotest_common.sh@867 -- # local i 00:11:44.819 06:36:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:44.819 06:36:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:44.819 06:36:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:44.819 06:36:55 -- common/autotest_common.sh@871 -- # break 00:11:44.819 06:36:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:44.819 06:36:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:44.819 06:36:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.819 1+0 records in 00:11:44.819 1+0 records out 00:11:44.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011405 s, 3.6 MB/s 00:11:44.819 06:36:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.819 06:36:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:44.819 06:36:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.819 06:36:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:44.819 06:36:55 -- common/autotest_common.sh@887 -- # return 0 
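Every waitfornbd call in this stretch runs the same readiness probe: poll /proc/partitions for the node, then pull a single 4 KiB block through it with O_DIRECT and check that something came back. A condensed sketch of that helper, assuming a throwaway scratch path in place of the repo's nbdtest file:

waitfornbd() {
  local nbd_name=$1 i size scratch=/tmp/nbdtest
  # Give the kernel up to 20 polls to publish the partition entry.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
  done
  # One direct 4 KiB read proves the device actually serves I/O.
  dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
  size=$(stat -c %s "$scratch")
  rm -f "$scratch"
  [ "$size" != 0 ]
}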
00:11:44.819 06:36:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd0", 00:11:44.819 "bdev_name": "nvme0n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd1", 00:11:44.819 "bdev_name": "nvme1n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd2", 00:11:44.819 "bdev_name": "nvme1n2" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd3", 00:11:44.819 "bdev_name": "nvme1n3" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd4", 00:11:44.819 "bdev_name": "nvme2n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd5", 00:11:44.819 "bdev_name": "nvme3n1" 00:11:44.819 } 00:11:44.819 ]' 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd0", 00:11:44.819 "bdev_name": "nvme0n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd1", 00:11:44.819 "bdev_name": "nvme1n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd2", 00:11:44.819 "bdev_name": "nvme1n2" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd3", 00:11:44.819 "bdev_name": "nvme1n3" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd4", 00:11:44.819 "bdev_name": "nvme2n1" 00:11:44.819 }, 00:11:44.819 { 00:11:44.819 "nbd_device": "/dev/nbd5", 00:11:44.819 "bdev_name": "nvme3n1" 00:11:44.819 } 00:11:44.819 ]' 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@51 -- # local i 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.819 06:36:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@41 -- # break 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.078 06:36:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@41 -- # break 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.337 06:36:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.596 06:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@41 -- # break 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@41 -- # break 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.597 06:36:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@41 -- # break 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.856 06:36:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@41 -- # break 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.114 06:36:56 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.114 06:36:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@65 -- # true 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@65 -- # count=0 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@122 -- # count=0 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@127 -- # return 0 00:11:46.373 06:36:56 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@12 -- # local i 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:46.373 06:36:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:11:46.632 /dev/nbd0 00:11:46.632 06:36:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.632 06:36:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.632 06:36:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:46.632 06:36:57 -- common/autotest_common.sh@867 -- # local i 00:11:46.632 06:36:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.632 06:36:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.632 06:36:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:46.632 06:36:57 -- common/autotest_common.sh@871 -- # break 00:11:46.632 06:36:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.632 06:36:57 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:11:46.632 06:36:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.632 1+0 records in 00:11:46.632 1+0 records out 00:11:46.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588508 s, 7.0 MB/s 00:11:46.632 06:36:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.632 06:36:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.632 06:36:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.633 06:36:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.633 06:36:57 -- common/autotest_common.sh@887 -- # return 0 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:11:46.633 /dev/nbd1 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:46.633 06:36:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:46.633 06:36:57 -- common/autotest_common.sh@867 -- # local i 00:11:46.633 06:36:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.633 06:36:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.633 06:36:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:46.633 06:36:57 -- common/autotest_common.sh@871 -- # break 00:11:46.633 06:36:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.633 06:36:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.633 06:36:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.633 1+0 records in 00:11:46.633 1+0 records out 00:11:46.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081 s, 5.1 MB/s 00:11:46.633 06:36:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.633 06:36:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.633 06:36:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.633 06:36:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.633 06:36:57 -- common/autotest_common.sh@887 -- # return 0 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:46.633 06:36:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:11:46.892 /dev/nbd10 00:11:46.892 06:36:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:46.892 06:36:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:46.892 06:36:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:46.892 06:36:57 -- common/autotest_common.sh@867 -- # local i 00:11:46.892 06:36:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.892 06:36:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.892 06:36:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:46.892 06:36:57 -- common/autotest_common.sh@871 -- # break 00:11:46.892 06:36:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.892 06:36:57 -- common/autotest_common.sh@882 
-- # (( i <= 20 )) 00:11:46.892 06:36:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.892 1+0 records in 00:11:46.892 1+0 records out 00:11:46.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086375 s, 4.7 MB/s 00:11:46.892 06:36:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.892 06:36:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.892 06:36:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.892 06:36:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.892 06:36:57 -- common/autotest_common.sh@887 -- # return 0 00:11:46.892 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.892 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:46.892 06:36:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:11:47.152 /dev/nbd11 00:11:47.152 06:36:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:47.152 06:36:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:47.152 06:36:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:47.152 06:36:57 -- common/autotest_common.sh@867 -- # local i 00:11:47.152 06:36:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.152 06:36:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.152 06:36:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:47.152 06:36:57 -- common/autotest_common.sh@871 -- # break 00:11:47.152 06:36:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.152 06:36:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.152 06:36:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.152 1+0 records in 00:11:47.152 1+0 records out 00:11:47.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882861 s, 4.6 MB/s 00:11:47.152 06:36:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.152 06:36:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.152 06:36:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.152 06:36:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.152 06:36:57 -- common/autotest_common.sh@887 -- # return 0 00:11:47.152 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.152 06:36:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:47.152 06:36:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:11:47.411 /dev/nbd12 00:11:47.411 06:36:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:47.411 06:36:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:47.411 06:36:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:47.411 06:36:58 -- common/autotest_common.sh@867 -- # local i 00:11:47.411 06:36:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.411 06:36:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.411 06:36:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:47.411 06:36:58 -- common/autotest_common.sh@871 -- # break 00:11:47.411 06:36:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.411 06:36:58 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.411 06:36:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.411 1+0 records in 00:11:47.411 1+0 records out 00:11:47.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109631 s, 3.7 MB/s 00:11:47.411 06:36:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.411 06:36:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.411 06:36:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.411 06:36:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.411 06:36:58 -- common/autotest_common.sh@887 -- # return 0 00:11:47.411 06:36:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.412 06:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:47.412 06:36:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:11:47.671 /dev/nbd13 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:47.671 06:36:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:47.671 06:36:58 -- common/autotest_common.sh@867 -- # local i 00:11:47.671 06:36:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.671 06:36:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.671 06:36:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:47.671 06:36:58 -- common/autotest_common.sh@871 -- # break 00:11:47.671 06:36:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.671 06:36:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.671 06:36:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.671 1+0 records in 00:11:47.671 1+0 records out 00:11:47.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944889 s, 4.3 MB/s 00:11:47.671 06:36:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.671 06:36:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.671 06:36:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.671 06:36:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.671 06:36:58 -- common/autotest_common.sh@887 -- # return 0 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.671 06:36:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd0", 00:11:47.930 "bdev_name": "nvme0n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd1", 00:11:47.930 "bdev_name": "nvme1n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd10", 00:11:47.930 "bdev_name": "nvme1n2" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd11", 00:11:47.930 "bdev_name": "nvme1n3" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": 
"/dev/nbd12", 00:11:47.930 "bdev_name": "nvme2n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd13", 00:11:47.930 "bdev_name": "nvme3n1" 00:11:47.930 } 00:11:47.930 ]' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd0", 00:11:47.930 "bdev_name": "nvme0n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd1", 00:11:47.930 "bdev_name": "nvme1n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd10", 00:11:47.930 "bdev_name": "nvme1n2" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd11", 00:11:47.930 "bdev_name": "nvme1n3" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd12", 00:11:47.930 "bdev_name": "nvme2n1" 00:11:47.930 }, 00:11:47.930 { 00:11:47.930 "nbd_device": "/dev/nbd13", 00:11:47.930 "bdev_name": "nvme3n1" 00:11:47.930 } 00:11:47.930 ]' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:47.930 /dev/nbd1 00:11:47.930 /dev/nbd10 00:11:47.930 /dev/nbd11 00:11:47.930 /dev/nbd12 00:11:47.930 /dev/nbd13' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:47.930 /dev/nbd1 00:11:47.930 /dev/nbd10 00:11:47.930 /dev/nbd11 00:11:47.930 /dev/nbd12 00:11:47.930 /dev/nbd13' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@65 -- # count=6 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@66 -- # echo 6 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@95 -- # count=6 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:47.930 256+0 records in 00:11:47.930 256+0 records out 00:11:47.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00834327 s, 126 MB/s 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:47.930 256+0 records in 00:11:47.930 256+0 records out 00:11:47.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174601 s, 6.0 MB/s 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:47.930 06:36:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:48.189 256+0 records in 00:11:48.189 256+0 records out 00:11:48.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176427 s, 5.9 MB/s 00:11:48.189 06:36:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:48.189 06:36:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:11:48.448 256+0 records in 00:11:48.448 256+0 records out 00:11:48.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179066 s, 5.9 MB/s 00:11:48.448 06:36:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:48.448 06:36:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:48.707 256+0 records in 00:11:48.707 256+0 records out 00:11:48.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211977 s, 4.9 MB/s 00:11:48.707 06:36:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:48.707 06:36:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:48.966 256+0 records in 00:11:48.966 256+0 records out 00:11:48.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.276536 s, 3.8 MB/s 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:48.966 256+0 records in 00:11:48.966 256+0 records out 00:11:48.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0805441 s, 13.0 MB/s 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:48.966 06:36:59 -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@51 -- # local i 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.966 06:36:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@41 -- # break 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.225 06:36:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@41 -- # break 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.484 06:37:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@41 -- # break 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@41 -- # break 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.742 06:37:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd12 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@41 -- # break 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.001 06:37:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@41 -- # break 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.260 06:37:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@65 -- # true 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@65 -- # count=0 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@104 -- # count=0 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@109 -- # return 0 00:11:50.519 06:37:01 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:50.519 malloc_lvol_verify 00:11:50.519 06:37:01 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:50.777 7a3762a9-04c9-46d3-bec3-aa1d72d8e824 00:11:50.777 06:37:01 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 
-l lvs 00:11:51.035 91301d12-f07d-4a25-b321-2c298e0ddc1e 00:11:51.035 06:37:01 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:51.293 /dev/nbd0 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:51.293 mke2fs 1.47.0 (5-Feb-2023) 00:11:51.293 Discarding device blocks: 0/4096 done 00:11:51.293 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:51.293 00:11:51.293 Allocating group tables: 0/1 done 00:11:51.293 Writing inode tables: 0/1 done 00:11:51.293 Creating journal (1024 blocks): done 00:11:51.293 Writing superblocks and filesystem accounting information: 0/1 done 00:11:51.293 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@51 -- # local i 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.293 06:37:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.293 06:37:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:51.551 06:37:02 -- bdev/nbd_common.sh@41 -- # break 00:11:51.551 06:37:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.551 06:37:02 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:51.551 06:37:02 -- bdev/nbd_common.sh@147 -- # return 0 00:11:51.551 06:37:02 -- bdev/blockdev.sh@324 -- # killprocess 79016 00:11:51.551 06:37:02 -- common/autotest_common.sh@936 -- # '[' -z 79016 ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@940 -- # kill -0 79016 00:11:51.551 06:37:02 -- common/autotest_common.sh@941 -- # uname 00:11:51.551 06:37:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79016 00:11:51.551 killing process with pid 79016 00:11:51.551 06:37:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:51.551 06:37:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79016' 00:11:51.551 06:37:02 -- common/autotest_common.sh@955 -- # kill 79016 00:11:51.551 06:37:02 -- common/autotest_common.sh@960 -- # wait 79016 00:11:51.551 ************************************ 00:11:51.551 END TEST bdev_nbd 00:11:51.551 ************************************ 00:11:51.551 06:37:02 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:51.551 00:11:51.551 real 0m9.010s 00:11:51.551 user 0m12.484s 00:11:51.551 sys 0m3.165s 00:11:51.551 06:37:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:51.551 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:11:51.551 06:37:02 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:51.551 06:37:02 -- bdev/blockdev.sh@762 -- # 
'[' xnvme = nvme ']' 00:11:51.551 06:37:02 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:11:51.551 06:37:02 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.551 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:11:51.551 ************************************ 00:11:51.551 START TEST bdev_fio 00:11:51.551 ************************************ 00:11:51.551 06:37:02 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:51.551 06:37:02 -- bdev/blockdev.sh@329 -- # local env_context 00:11:51.551 06:37:02 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:51.551 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:51.551 06:37:02 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:51.551 06:37:02 -- bdev/blockdev.sh@337 -- # echo '' 00:11:51.551 06:37:02 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:51.551 06:37:02 -- bdev/blockdev.sh@337 -- # env_context= 00:11:51.551 06:37:02 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:51.551 06:37:02 -- common/autotest_common.sh@1270 -- # local workload=verify 00:11:51.551 06:37:02 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:11:51.551 06:37:02 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:51.551 06:37:02 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:51.551 06:37:02 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:51.551 06:37:02 -- common/autotest_common.sh@1290 -- # cat 00:11:51.551 06:37:02 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1303 -- # cat 00:11:51.551 06:37:02 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:51.551 06:37:02 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:51.810 06:37:02 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:51.810 06:37:02 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:11:51.810 06:37:02 -- 
bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:11:51.810 06:37:02 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:51.810 06:37:02 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:11:51.810 06:37:02 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:51.810 06:37:02 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:51.810 06:37:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:51.810 06:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.810 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:11:51.810 ************************************ 00:11:51.810 START TEST bdev_fio_rw_verify 00:11:51.810 ************************************ 00:11:51.810 06:37:02 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:51.810 06:37:02 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:51.810 06:37:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:51.810 06:37:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:51.810 06:37:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:51.810 06:37:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:51.810 06:37:02 -- common/autotest_common.sh@1330 -- # shift 00:11:51.810 06:37:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:51.810 06:37:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:51.810 06:37:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:51.810 06:37:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:51.810 06:37:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:51.810 06:37:02 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:51.810 06:37:02 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:51.810 06:37:02 -- common/autotest_common.sh@1336 -- # break 00:11:51.810 06:37:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:51.810 06:37:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:51.810 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.810 fio-3.35 00:11:51.810 Starting 6 threads 00:12:04.020 00:12:04.020 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=79401: Thu Nov 28 06:37:13 2024 00:12:04.020 read: IOPS=45.5k, BW=178MiB/s (186MB/s)(1779MiB/10001msec) 00:12:04.020 slat (usec): min=2, max=403, avg= 4.69, stdev= 2.99 00:12:04.020 clat (usec): min=78, max=10534, avg=393.58, stdev=217.87 00:12:04.020 lat (usec): min=81, max=10539, avg=398.27, stdev=218.24 00:12:04.020 clat percentiles (usec): 00:12:04.020 | 50.000th=[ 363], 99.000th=[ 1037], 99.900th=[ 2278], 99.990th=[ 3752], 00:12:04.020 | 99.999th=[ 5866] 00:12:04.020 write: IOPS=45.9k, BW=179MiB/s (188MB/s)(1794MiB/10001msec); 0 zone resets 00:12:04.020 slat (usec): min=3, max=1925, avg=18.89, stdev=25.41 00:12:04.020 clat (usec): min=70, max=7658, avg=462.10, stdev=248.01 00:12:04.020 lat (usec): min=84, max=7710, avg=480.99, stdev=250.91 00:12:04.020 clat percentiles (usec): 00:12:04.020 | 50.000th=[ 424], 99.000th=[ 1287], 99.900th=[ 2769], 99.990th=[ 3818], 00:12:04.020 | 99.999th=[ 7504] 00:12:04.020 bw ( KiB/s): min=110034, max=205480, per=100.00%, avg=184151.42, stdev=3334.92, samples=114 00:12:04.020 iops : min=27508, max=51370, avg=46037.63, stdev=833.73, samples=114 00:12:04.020 lat (usec) : 100=0.07%, 250=18.64%, 500=52.56%, 750=22.45%, 1000=4.56% 00:12:04.020 lat (msec) : 2=1.47%, 4=0.25%, 10=0.01%, 20=0.01% 00:12:04.020 cpu : usr=59.44%, sys=25.22%, ctx=10888, majf=0, minf=37898 00:12:04.020 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.020 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.020 issued rwts: total=455311,459251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:04.020 00:12:04.020 Run status group 0 (all jobs): 00:12:04.020 READ: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=1779MiB (1865MB), run=10001-10001msec 00:12:04.020 WRITE: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=1794MiB (1881MB), run=10001-10001msec 00:12:04.020 ----------------------------------------------------- 00:12:04.020 Suppressions used: 00:12:04.020 count bytes template 00:12:04.020 6 48 /usr/src/fio/parse.c 00:12:04.020 3612 346752 /usr/src/fio/iolog.c 00:12:04.020 1 8 libtcmalloc_minimal.so 00:12:04.020 1 904 libcrypto.so 00:12:04.020 ----------------------------------------------------- 
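For reference, the fio stage traced above reduces to preloading the SPDK fio plugin into stock fio and pointing it at the generated job file. A minimal standalone sketch using the same flags as the trace (paths assume the repo layout shown above; the libasan preload in the trace is only needed on sanitizer builds):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Preload the SPDK bdev plugin so fio can use ioengine=spdk_bdev,
    # then run the generated job file against the JSON bdev config.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf="$SPDK/test/bdev/bdev.json" --spdk_mem=0 \
        --verify_state_save=0 --aux-path="$SPDK/../output" \
        "$SPDK/test/bdev/bdev.fio"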
00:12:04.020 00:12:04.020 00:12:04.020 real 0m11.018s 00:12:04.020 user 0m36.367s 00:12:04.020 sys 0m15.409s 00:12:04.020 06:37:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.020 ************************************ 00:12:04.020 END TEST bdev_fio_rw_verify 00:12:04.020 ************************************ 00:12:04.020 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:04.020 06:37:13 -- bdev/blockdev.sh@348 -- # rm -f 00:12:04.020 06:37:13 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.020 06:37:13 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.020 06:37:13 -- common/autotest_common.sh@1270 -- # local workload=trim 00:12:04.020 06:37:13 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:12:04.020 06:37:13 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:04.020 06:37:13 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:04.020 06:37:13 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.020 06:37:13 -- common/autotest_common.sh@1290 -- # cat 00:12:04.020 06:37:13 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:04.020 06:37:13 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:04.020 06:37:13 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c3f6356d-2f91-49f5-a65a-8fa5ba8ba9c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c3f6356d-2f91-49f5-a65a-8fa5ba8ba9c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "48a93659-a57d-4c55-b4ed-24578b534d6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48a93659-a57d-4c55-b4ed-24578b534d6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "dce9fbf9-e649-4906-8e10-39d7baddfb6c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "dce9fbf9-e649-4906-8e10-39d7baddfb6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d161159e-3ebd-4bb0-8b8b-3f57b2876c28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d161159e-3ebd-4bb0-8b8b-3f57b2876c28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d64909cf-5f76-4dba-945b-da0ce02edd4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d64909cf-5f76-4dba-945b-da0ce02edd4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a28b19a6-8dd6-4791-818c-2ea894119f16"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a28b19a6-8dd6-4791-818c-2ea894119f16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:04.020 06:37:13 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:12:04.020 06:37:13 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:04.020 06:37:13 -- bdev/blockdev.sh@360 -- # popd 00:12:04.020 /home/vagrant/spdk_repo/spdk 00:12:04.020 06:37:13 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:12:04.020 06:37:13 -- bdev/blockdev.sh@362 -- # return 0 00:12:04.020 00:12:04.020 real 0m11.158s 00:12:04.020 user 0m36.425s 00:12:04.020 sys 0m15.487s 00:12:04.020 06:37:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.020 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:04.020 ************************************ 00:12:04.020 END TEST bdev_fio 00:12:04.020 ************************************ 00:12:04.020 06:37:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:04.020 06:37:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:04.020 06:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.020 06:37:13 -- common/autotest_common.sh@10 -- # set +x 00:12:04.020 ************************************ 00:12:04.020 START TEST bdev_verify 00:12:04.020 ************************************ 00:12:04.020 06:37:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:04.020 [2024-11-28 06:37:13.548545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:04.020 [2024-11-28 06:37:13.548646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79567 ] 00:12:04.020 [2024-11-28 06:37:13.684274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.020 [2024-11-28 06:37:13.714291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.020 [2024-11-28 06:37:13.714323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.020 Running I/O for 5 seconds... 00:12:09.293 00:12:09.293 Latency(us) 00:12:09.293 [2024-11-28T06:37:20.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.293 [2024-11-28T06:37:20.063Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.293 Verification LBA range: start 0x0 length 0x20000 00:12:09.293 nvme0n1 : 5.05 3324.47 12.99 0.00 0.00 38358.51 14417.92 51622.20 00:12:09.293 [2024-11-28T06:37:20.063Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.293 Verification LBA range: start 0x20000 length 0x20000 00:12:09.293 nvme0n1 : 5.06 3344.52 13.06 0.00 0.00 38120.09 15325.34 47992.52 00:12:09.293 [2024-11-28T06:37:20.064Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x0 length 0x80000 00:12:09.294 nvme1n1 : 5.04 3273.84 12.79 0.00 0.00 38948.44 9729.58 58478.28 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x80000 length 0x80000 00:12:09.294 nvme1n1 : 5.05 3303.43 12.90 0.00 0.00 38614.94 9023.80 54041.99 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x0 length 0x80000 00:12:09.294 nvme1n2 : 5.05 3188.99 12.46 0.00 0.00 39902.35 15123.69 58478.28 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x80000 length 0x80000 00:12:09.294 nvme1n2 : 5.06 3239.47 12.65 0.00 0.00 39298.91 16333.59 50412.31 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x0 length 0x80000 00:12:09.294 nvme1n3 : 5.04 3225.34 12.60 0.00 0.00 39483.65 2583.63 56461.78 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
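Both verify stages here drive the same bdevperf example binary and differ only in I/O size (-o 4096 for bdev_verify, -o 65536 for bdev_verify_big_io). Stripped of the xtrace noise, the invocation is the following sketch (flags copied from the trace; -C is passed through as traced):

    SPDK=/home/vagrant/spdk_repo/spdk
    # -q 128: queue depth, -o: I/O size in bytes, -w verify: write-then-
    # read-back verification workload, -t 5: run time in seconds,
    # -m 0x3: core mask, matching the two reactors on cores 0 and 1 above.
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3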
00:12:09.294 Verification LBA range: start 0x80000 length 0x80000 00:12:09.294 nvme1n3 : 5.05 3263.30 12.75 0.00 0.00 39040.58 3478.45 56461.78 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x0 length 0xbd0bd 00:12:09.294 nvme2n1 : 5.05 3341.03 13.05 0.00 0.00 38066.78 3806.13 50815.61 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:09.294 nvme2n1 : 5.05 3307.01 12.92 0.00 0.00 38466.27 6654.42 54445.29 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0x0 length 0xa0000 00:12:09.294 nvme3n1 : 5.06 3346.89 13.07 0.00 0.00 37950.03 4411.08 57268.38 00:12:09.294 [2024-11-28T06:37:20.064Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:09.294 Verification LBA range: start 0xa0000 length 0xa0000 00:12:09.294 nvme3n1 : 5.06 3324.14 12.98 0.00 0.00 38164.26 3251.59 52428.80 00:12:09.294 [2024-11-28T06:37:20.064Z] =================================================================================================================== 00:12:09.294 [2024-11-28T06:37:20.064Z] Total : 39482.43 154.23 0.00 0.00 38692.03 2583.63 58478.28 00:12:09.294 00:12:09.294 real 0m5.651s 00:12:09.294 user 0m7.267s 00:12:09.294 sys 0m3.076s 00:12:09.294 06:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.294 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:12:09.294 ************************************ 00:12:09.294 END TEST bdev_verify 00:12:09.294 ************************************ 00:12:09.294 06:37:19 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:09.294 06:37:19 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:09.294 06:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.294 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:12:09.294 ************************************ 00:12:09.294 START TEST bdev_verify_big_io 00:12:09.294 ************************************ 00:12:09.294 06:37:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:09.294 [2024-11-28 06:37:19.239955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:09.294 [2024-11-28 06:37:19.240072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79668 ] 00:12:09.294 [2024-11-28 06:37:19.373779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.294 [2024-11-28 06:37:19.404376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.294 [2024-11-28 06:37:19.404415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.294 Running I/O for 5 seconds... 
00:12:14.573 00:12:14.573 Latency(us) 00:12:14.573 [2024-11-28T06:37:25.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x0 length 0x2000 00:12:14.573 nvme0n1 : 5.53 300.00 18.75 0.00 0.00 413369.13 60494.77 590428.95 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x2000 length 0x2000 00:12:14.573 nvme0n1 : 5.42 208.25 13.02 0.00 0.00 599471.91 133895.09 564617.85 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x0 length 0x8000 00:12:14.573 nvme1n1 : 5.53 234.48 14.65 0.00 0.00 514762.44 153253.42 542033.13 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x8000 length 0x8000 00:12:14.573 nvme1n1 : 5.47 237.22 14.83 0.00 0.00 521330.58 93968.54 583976.17 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x0 length 0x8000 00:12:14.573 nvme1n2 : 5.53 234.42 14.65 0.00 0.00 507499.03 95178.44 516222.03 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x8000 length 0x8000 00:12:14.573 nvme1n2 : 5.42 223.85 13.99 0.00 0.00 542385.60 58881.58 638824.76 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x0 length 0x8000 00:12:14.573 nvme1n3 : 5.55 250.48 15.66 0.00 0.00 475176.48 58881.58 693673.35 00:12:14.573 [2024-11-28T06:37:25.343Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.573 Verification LBA range: start 0x8000 length 0x8000 00:12:14.574 nvme1n3 : 5.47 237.17 14.82 0.00 0.00 500750.54 57671.68 606560.89 00:12:14.574 [2024-11-28T06:37:25.344Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.574 Verification LBA range: start 0x0 length 0xbd0b 00:12:14.574 nvme2n1 : 5.56 281.83 17.61 0.00 0.00 415982.06 20769.87 648503.93 00:12:14.574 [2024-11-28T06:37:25.344Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.574 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:14.574 nvme2n1 : 5.51 252.25 15.77 0.00 0.00 465114.70 6553.60 603334.50 00:12:14.574 [2024-11-28T06:37:25.344Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:14.574 Verification LBA range: start 0x0 length 0xa000 00:12:14.574 nvme3n1 : 5.56 282.43 17.65 0.00 0.00 405708.36 4209.43 474278.99 00:12:14.574 [2024-11-28T06:37:25.344Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:14.574 Verification LBA range: start 0xa000 length 0xa000 00:12:14.574 nvme3n1 : 5.52 284.34 17.77 0.00 0.00 410894.79 1008.25 709805.29 00:12:14.574 [2024-11-28T06:37:25.344Z] =================================================================================================================== 00:12:14.574 [2024-11-28T06:37:25.344Z] Total : 3026.72 189.17 0.00 0.00 474509.19 1008.25 709805.29 00:12:14.832 00:12:14.832 real 0m6.165s 00:12:14.832 user 
0m11.390s 00:12:14.832 sys 0m0.406s 00:12:14.832 06:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:14.832 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:12:14.832 ************************************ 00:12:14.832 END TEST bdev_verify_big_io 00:12:14.832 ************************************ 00:12:14.832 06:37:25 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:14.832 06:37:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:14.832 06:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.832 06:37:25 -- common/autotest_common.sh@10 -- # set +x 00:12:14.832 ************************************ 00:12:14.832 START TEST bdev_write_zeroes 00:12:14.832 ************************************ 00:12:14.832 06:37:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:14.832 [2024-11-28 06:37:25.437808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:14.832 [2024-11-28 06:37:25.437894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79756 ] 00:12:14.832 [2024-11-28 06:37:25.568525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.832 [2024-11-28 06:37:25.598843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.091 Running I/O for 1 seconds... 00:12:16.468 00:12:16.468 Latency(us) 00:12:16.468 [2024-11-28T06:37:27.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme0n1 : 1.01 11758.02 45.93 0.00 0.00 10873.29 8116.38 14115.45 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme1n1 : 1.02 11718.54 45.78 0.00 0.00 10903.50 8217.21 15022.87 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme1n2 : 1.02 11680.20 45.63 0.00 0.00 10931.39 8318.03 17543.48 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme1n3 : 1.02 11665.37 45.57 0.00 0.00 10940.19 8418.86 18551.73 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme2n1 : 1.03 22740.28 88.83 0.00 0.00 5605.01 3327.21 16636.06 00:12:16.468 [2024-11-28T06:37:27.238Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:16.468 nvme3n1 : 1.03 11591.02 45.28 0.00 0.00 10953.06 5192.47 21475.64 00:12:16.468 [2024-11-28T06:37:27.238Z] =================================================================================================================== 00:12:16.468 [2024-11-28T06:37:27.238Z] Total : 81153.43 317.01 0.00 0.00 9423.63 3327.21 21475.64 00:12:16.468 00:12:16.468 real 0m1.576s 00:12:16.468 user 0m0.864s 00:12:16.468 sys 0m0.551s 00:12:16.468 06:37:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:12:16.468 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:12:16.468 ************************************ 00:12:16.468 END TEST bdev_write_zeroes 00:12:16.468 ************************************ 00:12:16.468 06:37:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:16.468 06:37:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:16.468 06:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.468 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:16.468 ************************************ 00:12:16.468 START TEST bdev_json_nonenclosed 00:12:16.468 ************************************ 00:12:16.468 06:37:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:16.468 [2024-11-28 06:37:27.057043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:16.468 [2024-11-28 06:37:27.057147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79798 ] 00:12:16.468 [2024-11-28 06:37:27.192632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.468 [2024-11-28 06:37:27.222438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.468 [2024-11-28 06:37:27.222795] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:16.468 [2024-11-28 06:37:27.222827] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:16.727 ************************************ 00:12:16.727 END TEST bdev_json_nonenclosed 00:12:16.727 ************************************ 00:12:16.727 00:12:16.727 real 0m0.291s 00:12:16.727 user 0m0.111s 00:12:16.727 sys 0m0.077s 00:12:16.727 06:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.727 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:16.727 06:37:27 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:16.727 06:37:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:16.727 06:37:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.727 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:16.727 ************************************ 00:12:16.727 START TEST bdev_json_nonarray 00:12:16.727 ************************************ 00:12:16.727 06:37:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:16.727 [2024-11-28 06:37:27.387974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:16.727 [2024-11-28 06:37:27.388067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79818 ] 00:12:16.986 [2024-11-28 06:37:27.520894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.986 [2024-11-28 06:37:27.550614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.986 [2024-11-28 06:37:27.550977] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:16.986 [2024-11-28 06:37:27.551002] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:16.986 00:12:16.986 real 0m0.284s 00:12:16.986 user 0m0.112s 00:12:16.986 sys 0m0.069s 00:12:16.986 06:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.986 06:37:27 -- common/autotest_common.sh@10 -- # set +x 00:12:16.986 ************************************ 00:12:16.986 END TEST bdev_json_nonarray 00:12:16.986 ************************************ 00:12:16.986 06:37:27 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:16.986 06:37:27 -- bdev/blockdev.sh@809 -- # cleanup 00:12:16.986 06:37:27 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:16.986 06:37:27 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:16.986 06:37:27 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:16.986 06:37:27 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:16.987 06:37:27 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:17.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.022 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.022 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:36.022 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.098 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.098 00:13:08.098 real 1m31.645s 00:13:08.098 user 1m20.904s 00:13:08.098 sys 2m54.450s 00:13:08.098 06:38:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:08.098 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:13:08.098 ************************************ 00:13:08.098 END TEST blockdev_xnvme 00:13:08.098 ************************************ 00:13:08.098 06:38:15 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:08.098 06:38:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:08.098 06:38:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.098 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:13:08.098 ************************************ 00:13:08.098 START TEST ublk 00:13:08.098 ************************************ 00:13:08.098 06:38:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:08.098 * Looking for test storage... 
00:13:08.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:08.098 06:38:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:08.098 06:38:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:08.098 06:38:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:08.098 06:38:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:08.098 06:38:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:08.098 06:38:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:08.098 06:38:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:08.098 06:38:15 -- scripts/common.sh@335 -- # IFS=.-: 00:13:08.098 06:38:15 -- scripts/common.sh@335 -- # read -ra ver1 00:13:08.098 06:38:15 -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.098 06:38:15 -- scripts/common.sh@336 -- # read -ra ver2 00:13:08.098 06:38:15 -- scripts/common.sh@337 -- # local 'op=<' 00:13:08.098 06:38:15 -- scripts/common.sh@339 -- # ver1_l=2 00:13:08.098 06:38:15 -- scripts/common.sh@340 -- # ver2_l=1 00:13:08.098 06:38:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:08.098 06:38:15 -- scripts/common.sh@343 -- # case "$op" in 00:13:08.098 06:38:15 -- scripts/common.sh@344 -- # : 1 00:13:08.098 06:38:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:08.098 06:38:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.098 06:38:15 -- scripts/common.sh@364 -- # decimal 1 00:13:08.098 06:38:15 -- scripts/common.sh@352 -- # local d=1 00:13:08.098 06:38:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.098 06:38:15 -- scripts/common.sh@354 -- # echo 1 00:13:08.098 06:38:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:08.098 06:38:15 -- scripts/common.sh@365 -- # decimal 2 00:13:08.098 06:38:15 -- scripts/common.sh@352 -- # local d=2 00:13:08.098 06:38:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.098 06:38:15 -- scripts/common.sh@354 -- # echo 2 00:13:08.098 06:38:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:08.098 06:38:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:08.098 06:38:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:08.098 06:38:15 -- scripts/common.sh@367 -- # return 0 00:13:08.098 06:38:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.098 06:38:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.098 --rc genhtml_branch_coverage=1 00:13:08.098 --rc genhtml_function_coverage=1 00:13:08.098 --rc genhtml_legend=1 00:13:08.098 --rc geninfo_all_blocks=1 00:13:08.098 --rc geninfo_unexecuted_blocks=1 00:13:08.098 00:13:08.098 ' 00:13:08.098 06:38:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.098 --rc genhtml_branch_coverage=1 00:13:08.098 --rc genhtml_function_coverage=1 00:13:08.098 --rc genhtml_legend=1 00:13:08.098 --rc geninfo_all_blocks=1 00:13:08.098 --rc geninfo_unexecuted_blocks=1 00:13:08.098 00:13:08.098 ' 00:13:08.098 06:38:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:08.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.098 --rc genhtml_branch_coverage=1 00:13:08.098 --rc genhtml_function_coverage=1 00:13:08.098 --rc genhtml_legend=1 00:13:08.099 --rc geninfo_all_blocks=1 00:13:08.099 --rc geninfo_unexecuted_blocks=1 00:13:08.099 00:13:08.099 ' 00:13:08.099 06:38:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:08.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.099 --rc genhtml_branch_coverage=1 00:13:08.099 --rc genhtml_function_coverage=1 00:13:08.099 --rc genhtml_legend=1 00:13:08.099 --rc geninfo_all_blocks=1 00:13:08.099 --rc geninfo_unexecuted_blocks=1 00:13:08.099 00:13:08.099 ' 00:13:08.099 06:38:15 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:08.099 06:38:15 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:08.099 06:38:15 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:08.099 06:38:15 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:08.099 06:38:15 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:08.099 06:38:15 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:08.099 06:38:15 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:08.099 06:38:15 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:08.099 06:38:15 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:08.099 06:38:15 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:08.099 06:38:15 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:08.099 06:38:15 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:08.099 06:38:15 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:08.099 06:38:15 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:08.099 06:38:15 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:08.099 06:38:15 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:08.099 06:38:15 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:08.099 06:38:15 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:08.099 06:38:15 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:08.099 06:38:15 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:08.099 06:38:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:08.099 06:38:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:08.099 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:13:08.099 ************************************ 00:13:08.099 START TEST test_save_ublk_config 00:13:08.099 ************************************ 00:13:08.099 06:38:15 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:08.099 06:38:15 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:08.099 06:38:15 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:08.099 06:38:15 -- ublk/ublk.sh@103 -- # tgtpid=80235 00:13:08.099 06:38:15 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:08.099 06:38:15 -- ublk/ublk.sh@106 -- # waitforlisten 80235 00:13:08.099 06:38:15 -- common/autotest_common.sh@829 -- # '[' -z 80235 ']' 00:13:08.099 06:38:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.099 06:38:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.099 06:38:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.099 06:38:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.099 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:13:08.099 [2024-11-28 06:38:15.986749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:08.099 [2024-11-28 06:38:15.987004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80235 ] 00:13:08.099 [2024-11-28 06:38:16.120783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.099 [2024-11-28 06:38:16.163605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:08.099 [2024-11-28 06:38:16.163930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.099 06:38:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.099 06:38:16 -- common/autotest_common.sh@862 -- # return 0 00:13:08.099 06:38:16 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:08.099 06:38:16 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:08.099 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.099 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.099 [2024-11-28 06:38:16.762935] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:08.099 malloc0 00:13:08.099 [2024-11-28 06:38:16.786825] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:08.099 [2024-11-28 06:38:16.786895] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:08.099 [2024-11-28 06:38:16.786905] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:08.099 [2024-11-28 06:38:16.786914] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:08.099 [2024-11-28 06:38:16.795791] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:08.099 [2024-11-28 06:38:16.795819] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:08.099 [2024-11-28 06:38:16.802730] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:08.099 [2024-11-28 06:38:16.802822] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:08.099 [2024-11-28 06:38:16.819720] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:08.099 0 00:13:08.099 06:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.099 06:38:16 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:08.099 06:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.099 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:13:08.099 06:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.099 06:38:17 -- ublk/ublk.sh@115 -- # config='{ 00:13:08.099 "subsystems": [ 00:13:08.099 { 00:13:08.099 "subsystem": "iobuf", 00:13:08.099 "config": [ 00:13:08.099 { 00:13:08.099 "method": "iobuf_set_options", 00:13:08.099 "params": { 00:13:08.099 "small_pool_count": 8192, 00:13:08.099 "large_pool_count": 1024, 00:13:08.099 "small_bufsize": 8192, 00:13:08.099 "large_bufsize": 135168 00:13:08.099 } 00:13:08.099 } 00:13:08.099 ] 00:13:08.099 }, 00:13:08.099 { 00:13:08.099 "subsystem": "sock", 00:13:08.099 "config": [ 00:13:08.099 { 00:13:08.099 "method": "sock_impl_set_options", 00:13:08.099 "params": { 00:13:08.099 "impl_name": "posix", 00:13:08.099 "recv_buf_size": 2097152, 00:13:08.099 "send_buf_size": 2097152, 00:13:08.099 "enable_recv_pipe": true, 00:13:08.099 "enable_quickack": false, 00:13:08.099 "enable_placement_id": 0, 00:13:08.099 
"enable_zerocopy_send_server": true, 00:13:08.099 "enable_zerocopy_send_client": false, 00:13:08.099 "zerocopy_threshold": 0, 00:13:08.099 "tls_version": 0, 00:13:08.099 "enable_ktls": false 00:13:08.099 } 00:13:08.099 }, 00:13:08.099 { 00:13:08.099 "method": "sock_impl_set_options", 00:13:08.099 "params": { 00:13:08.099 "impl_name": "ssl", 00:13:08.099 "recv_buf_size": 4096, 00:13:08.099 "send_buf_size": 4096, 00:13:08.099 "enable_recv_pipe": true, 00:13:08.099 "enable_quickack": false, 00:13:08.099 "enable_placement_id": 0, 00:13:08.099 "enable_zerocopy_send_server": true, 00:13:08.099 "enable_zerocopy_send_client": false, 00:13:08.099 "zerocopy_threshold": 0, 00:13:08.099 "tls_version": 0, 00:13:08.099 "enable_ktls": false 00:13:08.099 } 00:13:08.099 } 00:13:08.099 ] 00:13:08.099 }, 00:13:08.099 { 00:13:08.099 "subsystem": "vmd", 00:13:08.099 "config": [] 00:13:08.099 }, 00:13:08.099 { 00:13:08.099 "subsystem": "accel", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "accel_set_options", 00:13:08.100 "params": { 00:13:08.100 "small_cache_size": 128, 00:13:08.100 "large_cache_size": 16, 00:13:08.100 "task_count": 2048, 00:13:08.100 "sequence_count": 2048, 00:13:08.100 "buf_count": 2048 00:13:08.100 } 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "bdev", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "bdev_set_options", 00:13:08.100 "params": { 00:13:08.100 "bdev_io_pool_size": 65535, 00:13:08.100 "bdev_io_cache_size": 256, 00:13:08.100 "bdev_auto_examine": true, 00:13:08.100 "iobuf_small_cache_size": 128, 00:13:08.100 "iobuf_large_cache_size": 16 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "bdev_raid_set_options", 00:13:08.100 "params": { 00:13:08.100 "process_window_size_kb": 1024 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "bdev_iscsi_set_options", 00:13:08.100 "params": { 00:13:08.100 "timeout_sec": 30 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "bdev_nvme_set_options", 00:13:08.100 "params": { 00:13:08.100 "action_on_timeout": "none", 00:13:08.100 "timeout_us": 0, 00:13:08.100 "timeout_admin_us": 0, 00:13:08.100 "keep_alive_timeout_ms": 10000, 00:13:08.100 "transport_retry_count": 4, 00:13:08.100 "arbitration_burst": 0, 00:13:08.100 "low_priority_weight": 0, 00:13:08.100 "medium_priority_weight": 0, 00:13:08.100 "high_priority_weight": 0, 00:13:08.100 "nvme_adminq_poll_period_us": 10000, 00:13:08.100 "nvme_ioq_poll_period_us": 0, 00:13:08.100 "io_queue_requests": 0, 00:13:08.100 "delay_cmd_submit": true, 00:13:08.100 "bdev_retry_count": 3, 00:13:08.100 "transport_ack_timeout": 0, 00:13:08.100 "ctrlr_loss_timeout_sec": 0, 00:13:08.100 "reconnect_delay_sec": 0, 00:13:08.100 "fast_io_fail_timeout_sec": 0, 00:13:08.100 "generate_uuids": false, 00:13:08.100 "transport_tos": 0, 00:13:08.100 "io_path_stat": false, 00:13:08.100 "allow_accel_sequence": false 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "bdev_nvme_set_hotplug", 00:13:08.100 "params": { 00:13:08.100 "period_us": 100000, 00:13:08.100 "enable": false 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "bdev_malloc_create", 00:13:08.100 "params": { 00:13:08.100 "name": "malloc0", 00:13:08.100 "num_blocks": 8192, 00:13:08.100 "block_size": 4096, 00:13:08.100 "physical_block_size": 4096, 00:13:08.100 "uuid": "cf0f6c28-3d6c-48ce-a254-8673cb7394a3", 00:13:08.100 "optimal_io_boundary": 0 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 
"method": "bdev_wait_for_examine" 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "scsi", 00:13:08.100 "config": null 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "scheduler", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "framework_set_scheduler", 00:13:08.100 "params": { 00:13:08.100 "name": "static" 00:13:08.100 } 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "vhost_scsi", 00:13:08.100 "config": [] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "vhost_blk", 00:13:08.100 "config": [] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "ublk", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "ublk_create_target", 00:13:08.100 "params": { 00:13:08.100 "cpumask": "1" 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "ublk_start_disk", 00:13:08.100 "params": { 00:13:08.100 "bdev_name": "malloc0", 00:13:08.100 "ublk_id": 0, 00:13:08.100 "num_queues": 1, 00:13:08.100 "queue_depth": 128 00:13:08.100 } 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "nbd", 00:13:08.100 "config": [] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "nvmf", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "nvmf_set_config", 00:13:08.100 "params": { 00:13:08.100 "discovery_filter": "match_any", 00:13:08.100 "admin_cmd_passthru": { 00:13:08.100 "identify_ctrlr": false 00:13:08.100 } 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "nvmf_set_max_subsystems", 00:13:08.100 "params": { 00:13:08.100 "max_subsystems": 1024 00:13:08.100 } 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "method": "nvmf_set_crdt", 00:13:08.100 "params": { 00:13:08.100 "crdt1": 0, 00:13:08.100 "crdt2": 0, 00:13:08.100 "crdt3": 0 00:13:08.100 } 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }, 00:13:08.100 { 00:13:08.100 "subsystem": "iscsi", 00:13:08.100 "config": [ 00:13:08.100 { 00:13:08.100 "method": "iscsi_set_options", 00:13:08.100 "params": { 00:13:08.100 "node_base": "iqn.2016-06.io.spdk", 00:13:08.100 "max_sessions": 128, 00:13:08.100 "max_connections_per_session": 2, 00:13:08.100 "max_queue_depth": 64, 00:13:08.100 "default_time2wait": 2, 00:13:08.100 "default_time2retain": 20, 00:13:08.100 "first_burst_length": 8192, 00:13:08.100 "immediate_data": true, 00:13:08.100 "allow_duplicated_isid": false, 00:13:08.100 "error_recovery_level": 0, 00:13:08.100 "nop_timeout": 60, 00:13:08.100 "nop_in_interval": 30, 00:13:08.100 "disable_chap": false, 00:13:08.100 "require_chap": false, 00:13:08.100 "mutual_chap": false, 00:13:08.100 "chap_group": 0, 00:13:08.100 "max_large_datain_per_connection": 64, 00:13:08.100 "max_r2t_per_connection": 4, 00:13:08.100 "pdu_pool_size": 36864, 00:13:08.100 "immediate_data_pool_size": 16384, 00:13:08.100 "data_out_pool_size": 2048 00:13:08.100 } 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 } 00:13:08.100 ] 00:13:08.100 }' 00:13:08.100 06:38:17 -- ublk/ublk.sh@116 -- # killprocess 80235 00:13:08.100 06:38:17 -- common/autotest_common.sh@936 -- # '[' -z 80235 ']' 00:13:08.100 06:38:17 -- common/autotest_common.sh@940 -- # kill -0 80235 00:13:08.101 06:38:17 -- common/autotest_common.sh@941 -- # uname 00:13:08.101 06:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:08.101 06:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80235 00:13:08.101 killing process with pid 80235 00:13:08.101 06:38:17 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:13:08.101 06:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:08.101 06:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80235' 00:13:08.101 06:38:17 -- common/autotest_common.sh@955 -- # kill 80235 00:13:08.101 06:38:17 -- common/autotest_common.sh@960 -- # wait 80235 00:13:08.101 [2024-11-28 06:38:17.277428] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:08.101 [2024-11-28 06:38:17.306328] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:08.101 [2024-11-28 06:38:17.306453] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:08.101 [2024-11-28 06:38:17.313748] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:08.101 [2024-11-28 06:38:17.313804] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:08.101 [2024-11-28 06:38:17.313812] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:08.101 [2024-11-28 06:38:17.313837] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:08.101 [2024-11-28 06:38:17.313967] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:08.101 06:38:17 -- ublk/ublk.sh@119 -- # tgtpid=80263 00:13:08.101 06:38:17 -- ublk/ublk.sh@121 -- # waitforlisten 80263 00:13:08.101 06:38:17 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:08.101 06:38:17 -- common/autotest_common.sh@829 -- # '[' -z 80263 ']' 00:13:08.101 06:38:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.101 06:38:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.101 06:38:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:08.101 06:38:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.101 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:13:08.101 06:38:17 -- ublk/ublk.sh@118 -- # echo '{ 00:13:08.101 "subsystems": [ 00:13:08.101 { 00:13:08.101 "subsystem": "iobuf", 00:13:08.101 "config": [ 00:13:08.101 { 00:13:08.101 "method": "iobuf_set_options", 00:13:08.101 "params": { 00:13:08.101 "small_pool_count": 8192, 00:13:08.101 "large_pool_count": 1024, 00:13:08.101 "small_bufsize": 8192, 00:13:08.101 "large_bufsize": 135168 00:13:08.101 } 00:13:08.101 } 00:13:08.101 ] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "sock", 00:13:08.101 "config": [ 00:13:08.101 { 00:13:08.101 "method": "sock_impl_set_options", 00:13:08.101 "params": { 00:13:08.101 "impl_name": "posix", 00:13:08.101 "recv_buf_size": 2097152, 00:13:08.101 "send_buf_size": 2097152, 00:13:08.101 "enable_recv_pipe": true, 00:13:08.101 "enable_quickack": false, 00:13:08.101 "enable_placement_id": 0, 00:13:08.101 "enable_zerocopy_send_server": true, 00:13:08.101 "enable_zerocopy_send_client": false, 00:13:08.101 "zerocopy_threshold": 0, 00:13:08.101 "tls_version": 0, 00:13:08.101 "enable_ktls": false 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "sock_impl_set_options", 00:13:08.101 "params": { 00:13:08.101 "impl_name": "ssl", 00:13:08.101 "recv_buf_size": 4096, 00:13:08.101 "send_buf_size": 4096, 00:13:08.101 "enable_recv_pipe": true, 00:13:08.101 "enable_quickack": false, 00:13:08.101 "enable_placement_id": 0, 00:13:08.101 "enable_zerocopy_send_server": true, 00:13:08.101 "enable_zerocopy_send_client": false, 00:13:08.101 "zerocopy_threshold": 0, 00:13:08.101 "tls_version": 0, 00:13:08.101 "enable_ktls": false 00:13:08.101 } 00:13:08.101 } 00:13:08.101 ] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "vmd", 00:13:08.101 "config": [] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "accel", 00:13:08.101 "config": [ 00:13:08.101 { 00:13:08.101 "method": "accel_set_options", 00:13:08.101 "params": { 00:13:08.101 "small_cache_size": 128, 00:13:08.101 "large_cache_size": 16, 00:13:08.101 "task_count": 2048, 00:13:08.101 "sequence_count": 2048, 00:13:08.101 "buf_count": 2048 00:13:08.101 } 00:13:08.101 } 00:13:08.101 ] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "bdev", 00:13:08.101 "config": [ 00:13:08.101 { 00:13:08.101 "method": "bdev_set_options", 00:13:08.101 "params": { 00:13:08.101 "bdev_io_pool_size": 65535, 00:13:08.101 "bdev_io_cache_size": 256, 00:13:08.101 "bdev_auto_examine": true, 00:13:08.101 "iobuf_small_cache_size": 128, 00:13:08.101 "iobuf_large_cache_size": 16 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_raid_set_options", 00:13:08.101 "params": { 00:13:08.101 "process_window_size_kb": 1024 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_iscsi_set_options", 00:13:08.101 "params": { 00:13:08.101 "timeout_sec": 30 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_nvme_set_options", 00:13:08.101 "params": { 00:13:08.101 "action_on_timeout": "none", 00:13:08.101 "timeout_us": 0, 00:13:08.101 "timeout_admin_us": 0, 00:13:08.101 "keep_alive_timeout_ms": 10000, 00:13:08.101 "transport_retry_count": 4, 00:13:08.101 "arbitration_burst": 0, 00:13:08.101 "low_priority_weight": 0, 00:13:08.101 "medium_priority_weight": 0, 00:13:08.101 "high_priority_weight": 0, 00:13:08.101 "nvme_adminq_poll_period_us": 10000, 00:13:08.101 "nvme_ioq_poll_period_us": 0, 00:13:08.101 
"io_queue_requests": 0, 00:13:08.101 "delay_cmd_submit": true, 00:13:08.101 "bdev_retry_count": 3, 00:13:08.101 "transport_ack_timeout": 0, 00:13:08.101 "ctrlr_loss_timeout_sec": 0, 00:13:08.101 "reconnect_delay_sec": 0, 00:13:08.101 "fast_io_fail_timeout_sec": 0, 00:13:08.101 "generate_uuids": false, 00:13:08.101 "transport_tos": 0, 00:13:08.101 "io_path_stat": false, 00:13:08.101 "allow_accel_sequence": false 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_nvme_set_hotplug", 00:13:08.101 "params": { 00:13:08.101 "period_us": 100000, 00:13:08.101 "enable": false 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_malloc_create", 00:13:08.101 "params": { 00:13:08.101 "name": "malloc0", 00:13:08.101 "num_blocks": 8192, 00:13:08.101 "block_size": 4096, 00:13:08.101 "physical_block_size": 4096, 00:13:08.101 "uuid": "cf0f6c28-3d6c-48ce-a254-8673cb7394a3", 00:13:08.101 "optimal_io_boundary": 0 00:13:08.101 } 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "method": "bdev_wait_for_examine" 00:13:08.101 } 00:13:08.101 ] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "scsi", 00:13:08.101 "config": null 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "scheduler", 00:13:08.101 "config": [ 00:13:08.101 { 00:13:08.101 "method": "framework_set_scheduler", 00:13:08.101 "params": { 00:13:08.101 "name": "static" 00:13:08.101 } 00:13:08.101 } 00:13:08.101 ] 00:13:08.101 }, 00:13:08.101 { 00:13:08.101 "subsystem": "vhost_scsi", 00:13:08.101 "config": [] 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "subsystem": "vhost_blk", 00:13:08.102 "config": [] 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "subsystem": "ublk", 00:13:08.102 "config": [ 00:13:08.102 { 00:13:08.102 "method": "ublk_create_target", 00:13:08.102 "params": { 00:13:08.102 "cpumask": "1" 00:13:08.102 } 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "method": "ublk_start_disk", 00:13:08.102 "params": { 00:13:08.102 "bdev_name": "malloc0", 00:13:08.102 "ublk_id": 0, 00:13:08.102 "num_queues": 1, 00:13:08.102 "queue_depth": 128 00:13:08.102 } 00:13:08.102 } 00:13:08.102 ] 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "subsystem": "nbd", 00:13:08.102 "config": [] 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "subsystem": "nvmf", 00:13:08.102 "config": [ 00:13:08.102 { 00:13:08.102 "method": "nvmf_set_config", 00:13:08.102 "params": { 00:13:08.102 "discovery_filter": "match_any", 00:13:08.102 "admin_cmd_passthru": { 00:13:08.102 "identify_ctrlr": false 00:13:08.102 } 00:13:08.102 } 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "method": "nvmf_set_max_subsystems", 00:13:08.102 "params": { 00:13:08.102 "max_subsystems": 1024 00:13:08.102 } 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "method": "nvmf_set_crdt", 00:13:08.102 "params": { 00:13:08.102 "crdt1": 0, 00:13:08.102 "crdt2": 0, 00:13:08.102 "crdt3": 0 00:13:08.102 } 00:13:08.102 } 00:13:08.102 ] 00:13:08.102 }, 00:13:08.102 { 00:13:08.102 "subsystem": "iscsi", 00:13:08.102 "config": [ 00:13:08.102 { 00:13:08.102 "method": "iscsi_set_options", 00:13:08.102 "params": { 00:13:08.102 "node_base": "iqn.2016-06.io.spdk", 00:13:08.102 "max_sessions": 128, 00:13:08.102 "max_connections_per_session": 2, 00:13:08.102 "max_queue_depth": 64, 00:13:08.102 "default_time2wait": 2, 00:13:08.102 "default_time2retain": 20, 00:13:08.102 "first_burst_length": 8192, 00:13:08.102 "immediate_data": true, 00:13:08.102 "allow_duplicated_isid": false, 00:13:08.102 "error_recovery_level": 0, 00:13:08.102 "nop_timeout": 60, 00:13:08.102 "nop_in_interval": 30, 00:13:08.102 
"disable_chap": false, 00:13:08.102 "require_chap": false, 00:13:08.102 "mutual_chap": false, 00:13:08.102 "chap_group": 0, 00:13:08.102 "max_large_datain_per_connection": 64, 00:13:08.102 "max_r2t_per_connection": 4, 00:13:08.102 "pdu_pool_size": 36864, 00:13:08.102 "immediate_data_pool_size": 16384, 00:13:08.102 "data_out_pool_size": 2048 00:13:08.102 } 00:13:08.102 } 00:13:08.102 ] 00:13:08.102 } 00:13:08.102 ] 00:13:08.102 }' 00:13:08.102 [2024-11-28 06:38:17.684835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:08.102 [2024-11-28 06:38:17.684948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80263 ] 00:13:08.102 [2024-11-28 06:38:17.820399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.102 [2024-11-28 06:38:17.877766] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:08.102 [2024-11-28 06:38:17.878023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.102 [2024-11-28 06:38:18.145922] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:08.102 [2024-11-28 06:38:18.153822] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:08.102 [2024-11-28 06:38:18.153897] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:08.102 [2024-11-28 06:38:18.153904] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:08.102 [2024-11-28 06:38:18.153914] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:08.102 [2024-11-28 06:38:18.162789] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:08.102 [2024-11-28 06:38:18.162816] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:08.102 [2024-11-28 06:38:18.169731] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:08.102 [2024-11-28 06:38:18.169820] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:08.102 [2024-11-28 06:38:18.186734] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:08.102 06:38:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.102 06:38:18 -- common/autotest_common.sh@862 -- # return 0 00:13:08.102 06:38:18 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:08.102 06:38:18 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:08.102 06:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.102 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:13:08.102 06:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.102 06:38:18 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:08.102 06:38:18 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:08.102 06:38:18 -- ublk/ublk.sh@125 -- # killprocess 80263 00:13:08.102 06:38:18 -- common/autotest_common.sh@936 -- # '[' -z 80263 ']' 00:13:08.102 06:38:18 -- common/autotest_common.sh@940 -- # kill -0 80263 00:13:08.102 06:38:18 -- common/autotest_common.sh@941 -- # uname 00:13:08.102 06:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:08.102 06:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80263 00:13:08.102 
killing process with pid 80263 00:13:08.102 06:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:08.102 06:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:08.102 06:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80263' 00:13:08.102 06:38:18 -- common/autotest_common.sh@955 -- # kill 80263 00:13:08.102 06:38:18 -- common/autotest_common.sh@960 -- # wait 80263 00:13:08.102 [2024-11-28 06:38:18.681992] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:08.102 [2024-11-28 06:38:18.712798] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:08.102 [2024-11-28 06:38:18.712917] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:08.102 [2024-11-28 06:38:18.719735] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:08.102 [2024-11-28 06:38:18.719785] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:08.102 [2024-11-28 06:38:18.719792] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:08.102 [2024-11-28 06:38:18.719825] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:08.102 [2024-11-28 06:38:18.719958] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:08.361 06:38:19 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:08.361 ************************************ 00:13:08.361 END TEST test_save_ublk_config 00:13:08.361 ************************************ 00:13:08.361 00:13:08.361 real 0m3.093s 00:13:08.361 user 0m2.189s 00:13:08.361 sys 0m1.367s 00:13:08.361 06:38:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:08.361 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:08.361 06:38:19 -- ublk/ublk.sh@139 -- # spdk_pid=80319 00:13:08.361 06:38:19 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.361 06:38:19 -- ublk/ublk.sh@141 -- # waitforlisten 80319 00:13:08.361 06:38:19 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:08.361 06:38:19 -- common/autotest_common.sh@829 -- # '[' -z 80319 ']' 00:13:08.361 06:38:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.361 06:38:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.361 06:38:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.361 06:38:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.361 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:08.361 [2024-11-28 06:38:19.108901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:08.361 [2024-11-28 06:38:19.109009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80319 ] 00:13:08.620 [2024-11-28 06:38:19.242738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.620 [2024-11-28 06:38:19.272668] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:08.620 [2024-11-28 06:38:19.272948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.620 [2024-11-28 06:38:19.272958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.187 06:38:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.187 06:38:19 -- common/autotest_common.sh@862 -- # return 0 00:13:09.187 06:38:19 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:09.187 06:38:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:09.187 06:38:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.187 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:09.187 ************************************ 00:13:09.187 START TEST test_create_ublk 00:13:09.187 ************************************ 00:13:09.187 06:38:19 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:09.187 06:38:19 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:09.187 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.187 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:09.187 [2024-11-28 06:38:19.862719] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:09.187 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.187 06:38:19 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:09.187 06:38:19 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:09.187 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.187 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:09.187 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.187 06:38:19 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:09.187 06:38:19 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:09.187 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.187 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:09.187 [2024-11-28 06:38:19.917850] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:09.187 [2024-11-28 06:38:19.918244] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:09.187 [2024-11-28 06:38:19.918253] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:09.187 [2024-11-28 06:38:19.918261] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.187 [2024-11-28 06:38:19.926884] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.187 [2024-11-28 06:38:19.926909] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.187 [2024-11-28 06:38:19.933729] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.187 [2024-11-28 06:38:19.944785] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:09.446 [2024-11-28 06:38:19.959804] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.446 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.446 06:38:19 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:09.446 06:38:19 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:09.446 06:38:19 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:09.446 06:38:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.446 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:13:09.446 06:38:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.446 06:38:19 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:09.446 { 00:13:09.446 "ublk_device": "/dev/ublkb0", 00:13:09.446 "id": 0, 00:13:09.446 "queue_depth": 512, 00:13:09.446 "num_queues": 4, 00:13:09.446 "bdev_name": "Malloc0" 00:13:09.446 } 00:13:09.446 ]' 00:13:09.446 06:38:19 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:09.446 06:38:20 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:09.446 06:38:20 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:09.446 06:38:20 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:09.446 06:38:20 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:09.446 06:38:20 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:09.446 06:38:20 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:09.446 06:38:20 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:09.446 06:38:20 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:09.446 06:38:20 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:09.446 06:38:20 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:09.446 06:38:20 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:09.446 06:38:20 -- lvol/common.sh@41 -- # local offset=0 00:13:09.446 06:38:20 -- lvol/common.sh@42 -- # local size=134217728 00:13:09.446 06:38:20 -- lvol/common.sh@43 -- # local rw=write 00:13:09.446 06:38:20 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:09.446 06:38:20 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:09.446 06:38:20 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:09.446 06:38:20 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:09.446 06:38:20 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:09.446 06:38:20 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:09.446 06:38:20 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:09.705 fio: verification read phase will never start because write phase uses all of runtime 00:13:09.705 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:09.705 fio-3.35 00:13:09.705 Starting 1 process 00:13:19.682 00:13:19.682 fio_test: (groupid=0, jobs=1): err= 0: pid=80358: Thu Nov 28 06:38:30 2024 00:13:19.682 write: IOPS=18.6k, BW=72.5MiB/s (76.0MB/s)(725MiB/10001msec); 0 zone resets 00:13:19.682 clat (usec): min=35, max=8006, avg=53.09, stdev=129.61 00:13:19.682 lat (usec): min=35, max=8006, avg=53.52, stdev=129.63 00:13:19.682 clat percentiles (usec): 00:13:19.682 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:13:19.682 | 
30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 48], 00:13:19.682 | 70.00th=[ 49], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 61], 00:13:19.682 | 99.00th=[ 71], 99.50th=[ 77], 99.90th=[ 2835], 99.95th=[ 3720], 00:13:19.682 | 99.99th=[ 4080] 00:13:19.682 bw ( KiB/s): min=27352, max=83680, per=99.78%, avg=74106.32, stdev=15983.31, samples=19 00:13:19.682 iops : min= 6838, max=20920, avg=18526.58, stdev=3995.83, samples=19 00:13:19.682 lat (usec) : 50=75.65%, 100=24.04%, 250=0.09%, 500=0.01%, 750=0.01% 00:13:19.682 lat (usec) : 1000=0.01% 00:13:19.682 lat (msec) : 2=0.05%, 4=0.12%, 10=0.02% 00:13:19.682 cpu : usr=3.05%, sys=13.02%, ctx=185690, majf=0, minf=797 00:13:19.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:19.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.682 issued rwts: total=0,185687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:19.682 00:13:19.682 Run status group 0 (all jobs): 00:13:19.682 WRITE: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=725MiB (761MB), run=10001-10001msec 00:13:19.682 00:13:19.682 Disk stats (read/write): 00:13:19.682 ublkb0: ios=0/183560, merge=0/0, ticks=0/8414, in_queue=8415, util=99.07% 00:13:19.682 06:38:30 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:19.682 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.682 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.682 [2024-11-28 06:38:30.380375] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:19.682 [2024-11-28 06:38:30.422765] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:19.682 [2024-11-28 06:38:30.423360] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:19.682 [2024-11-28 06:38:30.429789] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:19.682 [2024-11-28 06:38:30.430133] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:19.682 [2024-11-28 06:38:30.430716] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:19.682 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.682 06:38:30 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:19.682 06:38:30 -- common/autotest_common.sh@650 -- # local es=0 00:13:19.682 06:38:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:19.682 06:38:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:19.682 06:38:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.682 06:38:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:19.682 06:38:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.682 06:38:30 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:19.682 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.682 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.682 [2024-11-28 06:38:30.448801] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:19.941 request: 00:13:19.941 { 00:13:19.941 "ublk_id": 0, 00:13:19.941 "method": "ublk_stop_disk", 00:13:19.941 "req_id": 1 00:13:19.941 } 00:13:19.941 Got JSON-RPC error response 00:13:19.941 response: 00:13:19.941 { 00:13:19.941 "code": -19, 
00:13:19.941 "message": "No such device" 00:13:19.941 } 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:19.941 06:38:30 -- common/autotest_common.sh@653 -- # es=1 00:13:19.941 06:38:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.941 06:38:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:19.941 06:38:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.941 06:38:30 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 [2024-11-28 06:38:30.461779] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:19.941 [2024-11-28 06:38:30.462733] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:19.941 [2024-11-28 06:38:30.462766] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.941 06:38:30 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.941 06:38:30 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:19.941 06:38:30 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.941 06:38:30 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:19.941 06:38:30 -- lvol/common.sh@26 -- # jq length 00:13:19.941 06:38:30 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:19.941 06:38:30 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.941 06:38:30 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:19.941 06:38:30 -- lvol/common.sh@28 -- # jq length 00:13:19.941 ************************************ 00:13:19.941 END TEST test_create_ublk 00:13:19.941 ************************************ 00:13:19.941 06:38:30 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:19.941 00:13:19.941 real 0m10.782s 00:13:19.941 user 0m0.607s 00:13:19.941 sys 0m1.377s 00:13:19.941 06:38:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 06:38:30 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:19.941 06:38:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.941 06:38:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 ************************************ 00:13:19.941 START TEST test_create_multi_ublk 00:13:19.941 ************************************ 00:13:19.941 06:38:30 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:19.941 06:38:30 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:19.941 [2024-11-28 06:38:30.694595] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:13:19.941 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.941 06:38:30 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:19.941 06:38:30 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:19.941 06:38:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:19.941 06:38:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:19.941 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.941 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.200 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.200 06:38:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:20.200 06:38:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:20.200 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.200 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.200 [2024-11-28 06:38:30.777835] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:20.200 [2024-11-28 06:38:30.778135] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:20.200 [2024-11-28 06:38:30.778141] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:20.200 [2024-11-28 06:38:30.778147] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.200 [2024-11-28 06:38:30.789766] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.200 [2024-11-28 06:38:30.789789] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.200 [2024-11-28 06:38:30.801722] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.200 [2024-11-28 06:38:30.802202] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:20.200 [2024-11-28 06:38:30.827726] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.200 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.200 06:38:30 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:20.200 06:38:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.200 06:38:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:20.200 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.200 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.200 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.200 06:38:30 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:20.200 06:38:30 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:20.200 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.200 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.200 [2024-11-28 06:38:30.911814] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:20.200 [2024-11-28 06:38:30.912114] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:20.200 [2024-11-28 06:38:30.912125] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:20.200 [2024-11-28 06:38:30.912130] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.200 [2024-11-28 06:38:30.923770] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.200 [2024-11-28 06:38:30.923789] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
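
The multi-ublk test above is the single-disk create sequence run four times: for each ID 0 through 3 it creates a 128 MiB malloc bdev with 4 KiB blocks and exposes it as /dev/ublkbN with 4 queues of depth 512, and each ublk_start_disk walks the kernel through ADD_DEV, SET_PARAMS and START_DEV, exactly as the debug lines show. The same cycle repeats below for Malloc2 and Malloc3, after which the ublk_get_disks output is checked field by field with jq. A minimal sketch of that loop, using the test's own rpc_cmd helper (a thin wrapper around scripts/rpc.py):

for i in $(seq 0 3); do                                  # the log shows seq 0 3, i.e. MAX_DEV_ID=3
  rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096      # 128 MiB bdev, 4 KiB block size
  rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512    # /dev/ublkb$i: 4 queues x 512 depth
done
rpc_cmd ublk_get_disks | jq -r '.[0].ublk_device'        # field-by-field checks follow below
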
00:13:20.200 [2024-11-28 06:38:30.935728] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.201 [2024-11-28 06:38:30.936220] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:20.201 [2024-11-28 06:38:30.960725] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.460 06:38:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.460 06:38:30 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:20.460 06:38:30 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.460 06:38:30 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:20.460 06:38:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.460 06:38:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.460 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.460 06:38:31 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:20.460 06:38:31 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:20.460 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.460 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:20.460 [2024-11-28 06:38:31.043829] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:20.460 [2024-11-28 06:38:31.044134] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:20.460 [2024-11-28 06:38:31.044148] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:20.460 [2024-11-28 06:38:31.044154] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.460 [2024-11-28 06:38:31.055745] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.460 [2024-11-28 06:38:31.055767] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.460 [2024-11-28 06:38:31.067728] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.460 [2024-11-28 06:38:31.068220] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:20.460 [2024-11-28 06:38:31.078721] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.460 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.460 06:38:31 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:20.460 06:38:31 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.460 06:38:31 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:20.460 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.460 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:20.460 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.460 06:38:31 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:20.460 06:38:31 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:20.460 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.460 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:20.460 [2024-11-28 06:38:31.150835] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:20.460 [2024-11-28 06:38:31.151139] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:20.460 [2024-11-28 06:38:31.151147] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:20.460 [2024-11-28 06:38:31.151152] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:20.460 [2024-11-28 06:38:31.162742] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:20.460 [2024-11-28 06:38:31.162758] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:20.460 [2024-11-28 06:38:31.174733] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:20.460 [2024-11-28 06:38:31.175208] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:20.460 [2024-11-28 06:38:31.198735] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:20.460 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.460 06:38:31 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:20.460 06:38:31 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:20.460 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.460 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:20.720 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:20.720 { 00:13:20.720 "ublk_device": "/dev/ublkb0", 00:13:20.720 "id": 0, 00:13:20.720 "queue_depth": 512, 00:13:20.720 "num_queues": 4, 00:13:20.720 "bdev_name": "Malloc0" 00:13:20.720 }, 00:13:20.720 { 00:13:20.720 "ublk_device": "/dev/ublkb1", 00:13:20.720 "id": 1, 00:13:20.720 "queue_depth": 512, 00:13:20.720 "num_queues": 4, 00:13:20.720 "bdev_name": "Malloc1" 00:13:20.720 }, 00:13:20.720 { 00:13:20.720 "ublk_device": "/dev/ublkb2", 00:13:20.720 "id": 2, 00:13:20.720 "queue_depth": 512, 00:13:20.720 "num_queues": 4, 00:13:20.720 "bdev_name": "Malloc2" 00:13:20.720 }, 00:13:20.720 { 00:13:20.720 "ublk_device": "/dev/ublkb3", 00:13:20.720 "id": 3, 00:13:20.720 "queue_depth": 512, 00:13:20.720 "num_queues": 4, 00:13:20.720 "bdev_name": "Malloc3" 00:13:20.720 } 00:13:20.720 ]' 00:13:20.720 06:38:31 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:20.720 06:38:31 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.720 06:38:31 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:20.720 06:38:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:20.720 06:38:31 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:20.720 06:38:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:20.720 06:38:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:20.720 06:38:31 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.720 06:38:31 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:20.720 06:38:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:20.720 06:38:31 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:20.720 06:38:31 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:20.980 06:38:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:20.980 06:38:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:20.980 06:38:31 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.980 06:38:31 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:20.980 06:38:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:20.980 06:38:31 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:20.980 06:38:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:20.980 06:38:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:20.980 06:38:31 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:20.980 06:38:31 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:20.980 06:38:31 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:21.239 06:38:31 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:21.239 06:38:31 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:21.239 06:38:31 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:21.239 06:38:31 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:21.239 06:38:31 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:21.239 06:38:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.239 06:38:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:21.239 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.239 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 [2024-11-28 06:38:31.898799] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.239 [2024-11-28 06:38:31.938751] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.239 [2024-11-28 06:38:31.939391] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.239 [2024-11-28 06:38:31.946733] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.239 [2024-11-28 06:38:31.946948] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:21.239 [2024-11-28 06:38:31.946957] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:21.239 06:38:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.239 06:38:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.239 06:38:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:21.239 06:38:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.239 06:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:21.239 [2024-11-28 06:38:31.961789] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.239 [2024-11-28 06:38:32.005761] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.239 [2024-11-28 06:38:32.006359] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.497 [2024-11-28 06:38:32.013731] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.498 [2024-11-28 06:38:32.013935] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:21.498 [2024-11-28 06:38:32.013943] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:21.498 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.498 06:38:32 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.498 06:38:32 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:21.498 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.498 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.498 [2024-11-28 06:38:32.029773] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.498 [2024-11-28 06:38:32.061758] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.498 [2024-11-28 06:38:32.062331] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.498 [2024-11-28 06:38:32.069730] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.498 [2024-11-28 06:38:32.069940] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:21.498 [2024-11-28 06:38:32.069947] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:21.498 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.498 06:38:32 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.498 06:38:32 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:21.498 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.498 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.498 [2024-11-28 06:38:32.085792] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:21.498 [2024-11-28 06:38:32.125752] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:21.498 [2024-11-28 06:38:32.126310] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:21.498 [2024-11-28 06:38:32.133729] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:21.498 [2024-11-28 06:38:32.133931] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:21.498 [2024-11-28 06:38:32.133939] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:21.498 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.498 06:38:32 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:21.756 [2024-11-28 06:38:32.317780] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:21.756 [2024-11-28 06:38:32.318653] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:21.756 [2024-11-28 06:38:32.318672] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:21.756 06:38:32 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:21.756 06:38:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.756 06:38:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:21.756 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.756 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.756 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.756 06:38:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.756 06:38:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:21.756 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.756 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.756 06:38:32 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.756 06:38:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.756 06:38:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:21.756 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.756 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:21.756 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.756 06:38:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:21.756 06:38:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:21.756 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.756 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:22.014 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.014 06:38:32 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:22.014 06:38:32 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:22.014 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.014 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:22.014 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.014 06:38:32 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:22.014 06:38:32 -- lvol/common.sh@26 -- # jq length 00:13:22.014 06:38:32 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:22.015 06:38:32 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:22.015 06:38:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.015 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:22.015 06:38:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.015 06:38:32 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:22.015 06:38:32 -- lvol/common.sh@28 -- # jq length 00:13:22.015 06:38:32 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:22.015 00:13:22.015 real 0m1.977s 00:13:22.015 user 0m0.812s 00:13:22.015 sys 0m0.154s 00:13:22.015 ************************************ 00:13:22.015 END TEST test_create_multi_ublk 00:13:22.015 ************************************ 00:13:22.015 06:38:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.015 06:38:32 -- common/autotest_common.sh@10 -- # set +x 00:13:22.015 06:38:32 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:22.015 06:38:32 -- ublk/ublk.sh@147 -- # cleanup 00:13:22.015 06:38:32 -- ublk/ublk.sh@130 -- # killprocess 80319 00:13:22.015 06:38:32 -- common/autotest_common.sh@936 -- # '[' -z 80319 ']' 00:13:22.015 06:38:32 -- common/autotest_common.sh@940 -- # kill -0 80319 00:13:22.015 06:38:32 -- common/autotest_common.sh@941 -- # uname 00:13:22.015 06:38:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.015 06:38:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80319 00:13:22.015 06:38:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:22.015 06:38:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:22.015 killing process with pid 80319 00:13:22.015 06:38:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80319' 00:13:22.015 06:38:32 -- common/autotest_common.sh@955 -- # kill 80319 00:13:22.015 06:38:32 -- common/autotest_common.sh@960 -- # wait 80319 00:13:22.273 [2024-11-28 06:38:32.875719] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:22.273 [2024-11-28 06:38:32.875778] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:22.531 00:13:22.531 real 0m17.371s 00:13:22.531 user 0m27.294s 00:13:22.531 sys 0m7.367s 00:13:22.531 ************************************ 
00:13:22.531 END TEST ublk 00:13:22.531 ************************************ 00:13:22.531 06:38:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.531 06:38:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.531 06:38:33 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:22.531 06:38:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:22.531 06:38:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.531 06:38:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.531 ************************************ 00:13:22.531 START TEST ublk_recovery 00:13:22.531 ************************************ 00:13:22.531 06:38:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:22.531 * Looking for test storage... 00:13:22.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:22.531 06:38:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:22.531 06:38:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:22.531 06:38:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:22.850 06:38:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:22.850 06:38:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:22.850 06:38:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:22.850 06:38:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:22.850 06:38:33 -- scripts/common.sh@335 -- # IFS=.-: 00:13:22.850 06:38:33 -- scripts/common.sh@335 -- # read -ra ver1 00:13:22.850 06:38:33 -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.850 06:38:33 -- scripts/common.sh@336 -- # read -ra ver2 00:13:22.850 06:38:33 -- scripts/common.sh@337 -- # local 'op=<' 00:13:22.850 06:38:33 -- scripts/common.sh@339 -- # ver1_l=2 00:13:22.850 06:38:33 -- scripts/common.sh@340 -- # ver2_l=1 00:13:22.850 06:38:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:22.850 06:38:33 -- scripts/common.sh@343 -- # case "$op" in 00:13:22.850 06:38:33 -- scripts/common.sh@344 -- # : 1 00:13:22.850 06:38:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:22.850 06:38:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.850 06:38:33 -- scripts/common.sh@364 -- # decimal 1 00:13:22.850 06:38:33 -- scripts/common.sh@352 -- # local d=1 00:13:22.850 06:38:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.850 06:38:33 -- scripts/common.sh@354 -- # echo 1 00:13:22.850 06:38:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:22.850 06:38:33 -- scripts/common.sh@365 -- # decimal 2 00:13:22.850 06:38:33 -- scripts/common.sh@352 -- # local d=2 00:13:22.850 06:38:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.850 06:38:33 -- scripts/common.sh@354 -- # echo 2 00:13:22.850 06:38:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:22.850 06:38:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:22.850 06:38:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:22.850 06:38:33 -- scripts/common.sh@367 -- # return 0 00:13:22.850 06:38:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.850 06:38:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.850 --rc genhtml_branch_coverage=1 00:13:22.850 --rc genhtml_function_coverage=1 00:13:22.850 --rc genhtml_legend=1 00:13:22.850 --rc geninfo_all_blocks=1 00:13:22.850 --rc geninfo_unexecuted_blocks=1 00:13:22.850 00:13:22.850 ' 00:13:22.850 06:38:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.850 --rc genhtml_branch_coverage=1 00:13:22.850 --rc genhtml_function_coverage=1 00:13:22.850 --rc genhtml_legend=1 00:13:22.850 --rc geninfo_all_blocks=1 00:13:22.850 --rc geninfo_unexecuted_blocks=1 00:13:22.850 00:13:22.850 ' 00:13:22.850 06:38:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.850 --rc genhtml_branch_coverage=1 00:13:22.850 --rc genhtml_function_coverage=1 00:13:22.850 --rc genhtml_legend=1 00:13:22.850 --rc geninfo_all_blocks=1 00:13:22.850 --rc geninfo_unexecuted_blocks=1 00:13:22.850 00:13:22.850 ' 00:13:22.850 06:38:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.850 --rc genhtml_branch_coverage=1 00:13:22.850 --rc genhtml_function_coverage=1 00:13:22.850 --rc genhtml_legend=1 00:13:22.850 --rc geninfo_all_blocks=1 00:13:22.850 --rc geninfo_unexecuted_blocks=1 00:13:22.850 00:13:22.850 ' 00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:22.850 06:38:33 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:22.850 06:38:33 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:22.850 06:38:33 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:22.850 06:38:33 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:22.850 06:38:33 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:22.850 06:38:33 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:22.850 06:38:33 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:22.850 06:38:33 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:22.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
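
The recovery test brings its target up the same way before deliberately crashing it: load the ublk kernel driver, start spdk_tgt on two cores with ublk debug logging, then create the target and one malloc-backed disk. A sketch of that prologue, assuming $SPDK_DIR points at the checkout (the log spells out /home/vagrant/spdk_repo/spdk):

modprobe ublk_drv                                      # kernel side of ublk must be loaded
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x3 -L ublk &        # cores 0-1, ublk debug tracing
spdk_pid=$!
waitforlisten "$spdk_pid"                              # helper from autotest_common.sh
"$SPDK_DIR"/scripts/rpc.py ublk_create_target
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev
"$SPDK_DIR"/scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
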
00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=80680 00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 80680 00:13:22.850 06:38:33 -- common/autotest_common.sh@829 -- # '[' -z 80680 ']' 00:13:22.850 06:38:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.850 06:38:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.850 06:38:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.850 06:38:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.850 06:38:33 -- common/autotest_common.sh@10 -- # set +x 00:13:22.850 06:38:33 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:22.850 [2024-11-28 06:38:33.378354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:22.850 [2024-11-28 06:38:33.378439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80680 ] 00:13:22.850 [2024-11-28 06:38:33.507557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.850 [2024-11-28 06:38:33.535265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.850 [2024-11-28 06:38:33.535676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.850 [2024-11-28 06:38:33.535735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.434 06:38:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.434 06:38:34 -- common/autotest_common.sh@862 -- # return 0 00:13:23.434 06:38:34 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:23.434 06:38:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.434 06:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.434 [2024-11-28 06:38:34.194562] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:23.434 06:38:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.434 06:38:34 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:23.434 06:38:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.434 06:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.693 malloc0 00:13:23.693 06:38:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.693 06:38:34 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:23.693 06:38:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.693 06:38:34 -- common/autotest_common.sh@10 -- # set +x 00:13:23.693 [2024-11-28 06:38:34.225821] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:23.693 [2024-11-28 06:38:34.225907] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:23.693 [2024-11-28 06:38:34.225913] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:23.693 [2024-11-28 06:38:34.225919] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:23.693 [2024-11-28 06:38:34.234791] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:23.693 [2024-11-28 
06:38:34.234819] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:23.693 [2024-11-28 06:38:34.241725] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:23.693 [2024-11-28 06:38:34.241838] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:23.693 [2024-11-28 06:38:34.264720] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:23.693 1 00:13:23.693 06:38:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.693 06:38:34 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:24.628 06:38:35 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:24.628 06:38:35 -- ublk/ublk_recovery.sh@31 -- # fio_proc=80713 00:13:24.628 06:38:35 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:24.628 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.628 fio-3.35 00:13:24.628 Starting 1 process 00:13:29.894 06:38:40 -- ublk/ublk_recovery.sh@36 -- # kill -9 80680 00:13:29.894 06:38:40 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:35.166 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 80680 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:35.166 06:38:45 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=80827 00:13:35.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.166 06:38:45 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:35.166 06:38:45 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 80827 00:13:35.166 06:38:45 -- common/autotest_common.sh@829 -- # '[' -z 80827 ']' 00:13:35.166 06:38:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.166 06:38:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.166 06:38:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.166 06:38:45 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:35.166 06:38:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.166 06:38:45 -- common/autotest_common.sh@10 -- # set +x 00:13:35.166 [2024-11-28 06:38:45.353083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
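
With fio running randrw against /dev/ublkb1 for 60 seconds, the script SIGKILLs the target mid-I/O. The kernel keeps /dev/ublkb1 alive, so a fresh target process can re-attach to it: after recreating the target and the malloc0 bdev, ublk_recover_disk drives GET_DEV_INFO, then START_USER_RECOVERY and END_USER_RECOVERY, which is exactly the sequence in the debug lines that follow. In sketch form (same $SPDK_DIR assumption as above):

kill -9 "$spdk_pid"                                    # hard-kill the target while fio is mid-I/O
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x3 -L ublk &        # fresh target process
spdk_pid=$!
waitforlisten "$spdk_pid"
"$SPDK_DIR"/scripts/rpc.py ublk_create_target
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
"$SPDK_DIR"/scripts/rpc.py ublk_recover_disk malloc0 1  # re-attach the surviving /dev/ublkb1
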
00:13:35.166 [2024-11-28 06:38:45.353198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80827 ] 00:13:35.166 [2024-11-28 06:38:45.487402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.166 [2024-11-28 06:38:45.517803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.166 [2024-11-28 06:38:45.518333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.166 [2024-11-28 06:38:45.518490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.424 06:38:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.425 06:38:46 -- common/autotest_common.sh@862 -- # return 0 00:13:35.425 06:38:46 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:35.425 06:38:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.425 06:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:35.425 [2024-11-28 06:38:46.178730] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:35.425 06:38:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.425 06:38:46 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:35.425 06:38:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.425 06:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:35.684 malloc0 00:13:35.684 06:38:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.684 06:38:46 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:35.684 06:38:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.684 06:38:46 -- common/autotest_common.sh@10 -- # set +x 00:13:35.684 [2024-11-28 06:38:46.209861] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:35.684 [2024-11-28 06:38:46.209903] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:35.684 [2024-11-28 06:38:46.209913] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:35.684 [2024-11-28 06:38:46.217772] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:35.684 [2024-11-28 06:38:46.217795] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:35.684 1 00:13:35.684 [2024-11-28 06:38:46.217858] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:35.684 06:38:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.684 06:38:46 -- ublk/ublk_recovery.sh@52 -- # wait 80713 00:13:35.684 [2024-11-28 06:38:46.225730] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:35.684 [2024-11-28 06:38:46.232395] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:35.684 [2024-11-28 06:38:46.239935] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:35.684 [2024-11-28 06:38:46.239959] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:31.912 00:14:31.912 fio_test: (groupid=0, jobs=1): err= 0: pid=80716: Thu Nov 28 06:39:35 2024 00:14:31.912 read: IOPS=28.7k, BW=112MiB/s (117MB/s)(6723MiB/60001msec) 00:14:31.912 slat (nsec): min=1055, max=761266, avg=4929.29, 
stdev=1693.65 00:14:31.912 clat (usec): min=620, max=5971.3k, avg=2226.74, stdev=39400.89 00:14:31.912 lat (usec): min=625, max=5971.3k, avg=2231.67, stdev=39400.88 00:14:31.912 clat percentiles (usec): 00:14:31.912 | 1.00th=[ 1614], 5.00th=[ 1696], 10.00th=[ 1713], 20.00th=[ 1745], 00:14:31.912 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1811], 60.00th=[ 1844], 00:14:31.912 | 70.00th=[ 1958], 80.00th=[ 1991], 90.00th=[ 2057], 95.00th=[ 2933], 00:14:31.912 | 99.00th=[ 4686], 99.50th=[ 5342], 99.90th=[ 6521], 99.95th=[ 8029], 00:14:31.912 | 99.99th=[13304] 00:14:31.912 bw ( KiB/s): min=21229, max=137528, per=100.00%, avg=126376.23, stdev=16164.91, samples=108 00:14:31.912 iops : min= 5307, max=34382, avg=31594.06, stdev=4041.24, samples=108 00:14:31.912 write: IOPS=28.7k, BW=112MiB/s (117MB/s)(6716MiB/60001msec); 0 zone resets 00:14:31.912 slat (nsec): min=1124, max=822413, avg=5038.19, stdev=1764.33 00:14:31.912 clat (usec): min=616, max=5971.4k, avg=2226.87, stdev=33139.01 00:14:31.912 lat (usec): min=618, max=5971.4k, avg=2231.91, stdev=33139.00 00:14:31.912 clat percentiles (usec): 00:14:31.912 | 1.00th=[ 1647], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1827], 00:14:31.912 | 30.00th=[ 1844], 40.00th=[ 1876], 50.00th=[ 1893], 60.00th=[ 1926], 00:14:31.912 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2147], 95.00th=[ 2868], 00:14:31.912 | 99.00th=[ 4686], 99.50th=[ 5342], 99.90th=[ 6587], 99.95th=[ 7570], 00:14:31.912 | 99.99th=[13304] 00:14:31.912 bw ( KiB/s): min=20822, max=136920, per=100.00%, avg=126274.71, stdev=16202.17, samples=108 00:14:31.912 iops : min= 5205, max=34228, avg=31568.63, stdev=4050.55, samples=108 00:14:31.912 lat (usec) : 750=0.01%, 1000=0.01% 00:14:31.912 lat (msec) : 2=72.61%, 4=25.15%, 10=2.20%, 20=0.02%, >=2000=0.01% 00:14:31.912 cpu : usr=6.28%, sys=29.20%, ctx=114379, majf=0, minf=14 00:14:31.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:31.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:31.912 issued rwts: total=1721089,1719423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:31.912 00:14:31.912 Run status group 0 (all jobs): 00:14:31.912 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=6723MiB (7050MB), run=60001-60001msec 00:14:31.912 WRITE: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=6716MiB (7043MB), run=60001-60001msec 00:14:31.912 00:14:31.912 Disk stats (read/write): 00:14:31.912 ublkb1: ios=1717596/1716085, merge=0/0, ticks=3740111/3600535, in_queue=7340646, util=99.93% 00:14:31.912 06:39:35 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:31.912 06:39:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.912 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 [2024-11-28 06:39:35.526061] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:31.912 [2024-11-28 06:39:35.569738] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:31.912 [2024-11-28 06:39:35.569873] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:31.912 [2024-11-28 06:39:35.578728] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:31.912 [2024-11-28 06:39:35.578833] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
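
The fio report above is the pass criterion for the recovery test: err= 0 with roughly 1.72 M reads and 1.72 M writes issued at ~112 MiB/s each way over the 60-second run, and ublkb1 at 99.93% utilization, so I/O to the recovered device completed without errors. The large completion-latency stdev (~39 ms against a ~2.2 ms average, with a ~6 s max) most likely reflects requests stalled during the window when the target was down. Teardown then mirrors setup in reverse, as the surrounding RPCs show; in sketch form:

"$SPDK_DIR"/scripts/rpc.py ublk_stop_disk 1        # STOP_DEV then DEL_DEV, as logged here
"$SPDK_DIR"/scripts/rpc.py ublk_destroy_target     # shuts the ublk target down
kill "$spdk_pid"                                   # the script's killprocess adds safety checks
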
00:14:31.912 [2024-11-28 06:39:35.578844] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:31.912 06:39:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.912 06:39:35 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:31.912 06:39:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.912 06:39:35 -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 [2024-11-28 06:39:35.586791] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.912 [2024-11-28 06:39:35.587718] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:31.912 [2024-11-28 06:39:35.587751] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:31.912 06:39:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.912 06:39:35 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:31.912 06:39:35 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:31.912 06:39:35 -- ublk/ublk_recovery.sh@14 -- # killprocess 80827 00:14:31.912 06:39:35 -- common/autotest_common.sh@936 -- # '[' -z 80827 ']' 00:14:31.912 06:39:35 -- common/autotest_common.sh@940 -- # kill -0 80827 00:14:31.912 06:39:35 -- common/autotest_common.sh@941 -- # uname 00:14:31.912 06:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.912 06:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80827 00:14:31.912 killing process with pid 80827 00:14:31.912 06:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.912 06:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.912 06:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80827' 00:14:31.912 06:39:35 -- common/autotest_common.sh@955 -- # kill 80827 00:14:31.912 06:39:35 -- common/autotest_common.sh@960 -- # wait 80827 00:14:31.912 [2024-11-28 06:39:35.785787] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.912 [2024-11-28 06:39:35.785839] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:31.912 ************************************ 00:14:31.912 END TEST ublk_recovery 00:14:31.912 ************************************ 00:14:31.912 00:14:31.912 real 1m2.872s 00:14:31.912 user 1m41.001s 00:14:31.912 sys 0m35.513s 00:14:31.912 06:39:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.912 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 06:39:36 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@255 -- # timing_exit lib 00:14:31.912 06:39:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.912 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 06:39:36 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:14:31.912 06:39:36 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:31.912 06:39:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.912 06:39:36 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:14:31.912 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 ************************************ 00:14:31.912 START TEST ftl 00:14:31.912 ************************************ 00:14:31.912 06:39:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:31.912 * Looking for test storage... 00:14:31.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.912 06:39:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.912 06:39:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.912 06:39:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.912 06:39:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.912 06:39:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.913 06:39:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.913 06:39:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.913 06:39:36 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.913 06:39:36 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.913 06:39:36 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.913 06:39:36 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.913 06:39:36 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.913 06:39:36 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.913 06:39:36 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.913 06:39:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.913 06:39:36 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.913 06:39:36 -- scripts/common.sh@344 -- # : 1 00:14:31.913 06:39:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.913 06:39:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.913 06:39:36 -- scripts/common.sh@364 -- # decimal 1 00:14:31.913 06:39:36 -- scripts/common.sh@352 -- # local d=1 00:14:31.913 06:39:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.913 06:39:36 -- scripts/common.sh@354 -- # echo 1 00:14:31.913 06:39:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.913 06:39:36 -- scripts/common.sh@365 -- # decimal 2 00:14:31.913 06:39:36 -- scripts/common.sh@352 -- # local d=2 00:14:31.913 06:39:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.913 06:39:36 -- scripts/common.sh@354 -- # echo 2 00:14:31.913 06:39:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.913 06:39:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.913 06:39:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.913 06:39:36 -- scripts/common.sh@367 -- # return 0 00:14:31.913 06:39:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.913 06:39:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.913 --rc genhtml_branch_coverage=1 00:14:31.913 --rc genhtml_function_coverage=1 00:14:31.913 --rc genhtml_legend=1 00:14:31.913 --rc geninfo_all_blocks=1 00:14:31.913 --rc geninfo_unexecuted_blocks=1 00:14:31.913 00:14:31.913 ' 00:14:31.913 06:39:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.913 --rc genhtml_branch_coverage=1 00:14:31.913 --rc genhtml_function_coverage=1 00:14:31.913 --rc genhtml_legend=1 00:14:31.913 --rc geninfo_all_blocks=1 00:14:31.913 --rc geninfo_unexecuted_blocks=1 00:14:31.913 00:14:31.913 ' 00:14:31.913 06:39:36 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.913 --rc genhtml_branch_coverage=1 00:14:31.913 --rc genhtml_function_coverage=1 00:14:31.913 --rc genhtml_legend=1 00:14:31.913 --rc geninfo_all_blocks=1 00:14:31.913 --rc geninfo_unexecuted_blocks=1 00:14:31.913 00:14:31.913 ' 00:14:31.913 06:39:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.913 --rc genhtml_branch_coverage=1 00:14:31.913 --rc genhtml_function_coverage=1 00:14:31.913 --rc genhtml_legend=1 00:14:31.913 --rc geninfo_all_blocks=1 00:14:31.913 --rc geninfo_unexecuted_blocks=1 00:14:31.913 00:14:31.913 ' 00:14:31.913 06:39:36 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:31.913 06:39:36 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:31.913 06:39:36 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.913 06:39:36 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.913 06:39:36 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:31.913 06:39:36 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:31.913 06:39:36 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.913 06:39:36 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:31.913 06:39:36 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:31.913 06:39:36 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.913 06:39:36 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.913 06:39:36 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:31.913 06:39:36 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:31.913 06:39:36 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:31.913 06:39:36 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:31.913 06:39:36 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:31.913 06:39:36 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:31.913 06:39:36 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.913 06:39:36 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.913 06:39:36 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:31.913 06:39:36 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:31.913 06:39:36 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:31.913 06:39:36 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:31.913 06:39:36 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:31.913 06:39:36 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:31.913 06:39:36 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:31.913 06:39:36 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:31.913 06:39:36 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.913 06:39:36 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.913 06:39:36 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.913 06:39:36 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:31.913 06:39:36 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:31.913 06:39:36 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:31.913 06:39:36 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:31.913 06:39:36 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:31.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:31.913 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.913 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.913 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.913 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.913 06:39:36 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=81623 00:14:31.913 06:39:36 -- ftl/ftl.sh@38 -- # waitforlisten 81623 00:14:31.913 06:39:36 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:31.913 06:39:36 -- common/autotest_common.sh@829 -- # '[' -z 81623 ']' 00:14:31.913 06:39:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.913 06:39:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.913 06:39:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.913 06:39:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.913 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:14:31.913 [2024-11-28 06:39:36.772870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:31.913 [2024-11-28 06:39:36.773307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81623 ] 00:14:31.913 [2024-11-28 06:39:36.904146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.913 [2024-11-28 06:39:36.931520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.913 [2024-11-28 06:39:36.931687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.913 06:39:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.913 06:39:37 -- common/autotest_common.sh@862 -- # return 0 00:14:31.913 06:39:37 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:31.913 06:39:37 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:31.913 06:39:38 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:31.913 06:39:38 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:31.913 06:39:38 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:31.913 06:39:38 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:31.913 06:39:38 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:31.913 06:39:38 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:14:31.913 06:39:38 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:31.913 06:39:38 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:14:31.913 06:39:38 -- ftl/ftl.sh@50 
-- # break 00:14:31.913 06:39:38 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:14:31.913 06:39:38 -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:31.913 06:39:38 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:31.913 06:39:38 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:31.913 06:39:38 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:14:31.913 06:39:38 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:31.913 06:39:38 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:14:31.913 06:39:38 -- ftl/ftl.sh@63 -- # break 00:14:31.913 06:39:38 -- ftl/ftl.sh@66 -- # killprocess 81623 00:14:31.913 06:39:38 -- common/autotest_common.sh@936 -- # '[' -z 81623 ']' 00:14:31.913 06:39:38 -- common/autotest_common.sh@940 -- # kill -0 81623 00:14:31.913 06:39:38 -- common/autotest_common.sh@941 -- # uname 00:14:31.913 06:39:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.913 06:39:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81623 00:14:31.913 killing process with pid 81623 00:14:31.913 06:39:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.913 06:39:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.913 06:39:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81623' 00:14:31.913 06:39:38 -- common/autotest_common.sh@955 -- # kill 81623 00:14:31.913 06:39:38 -- common/autotest_common.sh@960 -- # wait 81623 00:14:31.913 06:39:39 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:14:31.914 06:39:39 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:14:31.914 06:39:39 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:31.914 06:39:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:31.914 06:39:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.914 06:39:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.914 ************************************ 00:14:31.914 START TEST ftl_fio_basic 00:14:31.914 ************************************ 00:14:31.914 06:39:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:14:31.914 * Looking for test storage... 
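
Before any FTL test runs, ftl.sh picks its disks by filtering bdev_get_bdevs output with jq: the nv-cache must be a non-zoned namespace with 64-byte metadata and at least 1310720 blocks, and the base device is any other non-zoned namespace of the same minimum size. A sketch of that selection with the jq filters taken verbatim from the log; the first-match handling via head -n1 is a simplification of the script's for/break loop, and $SPDK_DIR is again an assumption:

rpc="$SPDK_DIR/scripts/rpc.py"
nv_cache=$("$rpc" bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)   # -> 0000:00:06.0 here
device=$("$rpc" bdev_get_bdevs | jq -r ".[] | select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address" | head -n1)   # -> 0000:00:07.0 here
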
00:14:31.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.914 06:39:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.914 06:39:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.914 06:39:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.914 06:39:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.914 06:39:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.914 06:39:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.914 06:39:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.914 06:39:39 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.914 06:39:39 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.914 06:39:39 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.914 06:39:39 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.914 06:39:39 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.914 06:39:39 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.914 06:39:39 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.914 06:39:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.914 06:39:39 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.914 06:39:39 -- scripts/common.sh@344 -- # : 1 00:14:31.914 06:39:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.914 06:39:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.914 06:39:39 -- scripts/common.sh@364 -- # decimal 1 00:14:31.914 06:39:39 -- scripts/common.sh@352 -- # local d=1 00:14:31.914 06:39:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.914 06:39:39 -- scripts/common.sh@354 -- # echo 1 00:14:31.914 06:39:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.914 06:39:39 -- scripts/common.sh@365 -- # decimal 2 00:14:31.914 06:39:39 -- scripts/common.sh@352 -- # local d=2 00:14:31.914 06:39:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.914 06:39:39 -- scripts/common.sh@354 -- # echo 2 00:14:31.914 06:39:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.914 06:39:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.914 06:39:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.914 06:39:39 -- scripts/common.sh@367 -- # return 0 00:14:31.914 06:39:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.914 06:39:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.914 --rc genhtml_branch_coverage=1 00:14:31.914 --rc genhtml_function_coverage=1 00:14:31.914 --rc genhtml_legend=1 00:14:31.914 --rc geninfo_all_blocks=1 00:14:31.914 --rc geninfo_unexecuted_blocks=1 00:14:31.914 00:14:31.914 ' 00:14:31.914 06:39:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.914 --rc genhtml_branch_coverage=1 00:14:31.914 --rc genhtml_function_coverage=1 00:14:31.914 --rc genhtml_legend=1 00:14:31.914 --rc geninfo_all_blocks=1 00:14:31.914 --rc geninfo_unexecuted_blocks=1 00:14:31.914 00:14:31.914 ' 00:14:31.914 06:39:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.914 --rc genhtml_branch_coverage=1 00:14:31.914 --rc genhtml_function_coverage=1 00:14:31.914 --rc genhtml_legend=1 00:14:31.914 --rc geninfo_all_blocks=1 00:14:31.914 --rc geninfo_unexecuted_blocks=1 00:14:31.914 00:14:31.914 ' 00:14:31.914 06:39:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.914 --rc genhtml_branch_coverage=1 00:14:31.914 --rc genhtml_function_coverage=1 00:14:31.914 --rc genhtml_legend=1 00:14:31.914 --rc geninfo_all_blocks=1 00:14:31.914 --rc geninfo_unexecuted_blocks=1 00:14:31.914 00:14:31.914 ' 00:14:31.914 06:39:39 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:31.914 06:39:39 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:31.914 06:39:39 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.914 06:39:39 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:31.914 06:39:39 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:31.914 06:39:39 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:31.914 06:39:39 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.914 06:39:39 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:31.914 06:39:39 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:31.914 06:39:39 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.914 06:39:39 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.914 06:39:39 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:31.914 06:39:39 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:31.914 06:39:39 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:31.914 06:39:39 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:31.914 06:39:39 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:31.914 06:39:39 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:31.914 06:39:39 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.914 06:39:39 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:31.914 06:39:39 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:31.914 06:39:39 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:31.914 06:39:39 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:31.914 06:39:39 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:31.914 06:39:39 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:31.914 06:39:39 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:31.914 06:39:39 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:31.914 06:39:39 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:31.914 06:39:39 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.914 06:39:39 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.914 06:39:39 -- ftl/fio.sh@11 -- # declare -A suite 00:14:31.914 06:39:39 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:31.914 06:39:39 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:31.914 06:39:39 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:31.914 06:39:39 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.914 06:39:39 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:14:31.914 06:39:39 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:14:31.914 06:39:39 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:31.914 06:39:39 -- ftl/fio.sh@26 -- # uuid= 00:14:31.914 06:39:39 -- ftl/fio.sh@27 -- # timeout=240 00:14:31.914 06:39:39 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:31.914 06:39:39 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:31.914 06:39:39 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:31.914 06:39:39 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:31.914 06:39:39 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:31.914 06:39:39 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:31.914 06:39:39 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:31.914 06:39:39 -- ftl/fio.sh@45 -- # svcpid=81738 00:14:31.914 06:39:39 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:31.914 06:39:39 -- ftl/fio.sh@46 -- # waitforlisten 81738 00:14:31.914 06:39:39 -- common/autotest_common.sh@829 -- # '[' -z 81738 ']' 00:14:31.914 06:39:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.914 06:39:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.914 06:39:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.914 06:39:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.914 06:39:39 -- common/autotest_common.sh@10 -- # set +x 00:14:31.914 [2024-11-28 06:39:39.309121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
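The fio.sh prologue above reduces to: select the 'basic' workload list, export the two variables the fio bdev plugin will read (FTL_BDEV_NAME and FTL_JSON_CONF), install a cleanup trap, then launch spdk_tgt on cores 0-2 (-m 7, which the EAL banner echoes as '-c 7' with 'Total cores available: 3') and block until its RPC socket answers. A simplified sketch; waitforlisten is approximated here by a plain poll loop, and SPDK_BIN_DIR/testdir stand in for the repo paths shown in the trace:

    # Approximate shape of the fio.sh@39-46 startup traced above.
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF="$testdir/config/ftl.json"
    trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT

    "$SPDK_BIN_DIR/spdk_tgt" -m 7 &
    svcpid=$!

    # waitforlisten, reduced to its essence: poll the RPC socket.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1   # target died during startup
        sleep 0.5
    done

Once the three reactors report started, the script proceeds to build the bdev stack that FTL will sit on.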
00:14:31.914 [2024-11-28 06:39:39.309366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81738 ] 00:14:31.914 [2024-11-28 06:39:39.439861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.914 [2024-11-28 06:39:39.468537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.914 [2024-11-28 06:39:39.468961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.914 [2024-11-28 06:39:39.469358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.914 [2024-11-28 06:39:39.469424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.914 06:39:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.914 06:39:40 -- common/autotest_common.sh@862 -- # return 0 00:14:31.914 06:39:40 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:14:31.914 06:39:40 -- ftl/common.sh@54 -- # local name=nvme0 00:14:31.914 06:39:40 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:14:31.914 06:39:40 -- ftl/common.sh@56 -- # local size=103424 00:14:31.914 06:39:40 -- ftl/common.sh@59 -- # local base_bdev 00:14:31.914 06:39:40 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:14:31.914 06:39:40 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:31.914 06:39:40 -- ftl/common.sh@62 -- # local base_size 00:14:31.914 06:39:40 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:31.914 06:39:40 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:14:31.915 06:39:40 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:31.915 06:39:40 -- common/autotest_common.sh@1369 -- # local bs 00:14:31.915 06:39:40 -- common/autotest_common.sh@1370 -- # local nb 00:14:31.915 06:39:40 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:31.915 06:39:40 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:31.915 { 00:14:31.915 "name": "nvme0n1", 00:14:31.915 "aliases": [ 00:14:31.915 "f0e27f2e-dfa8-46cb-a8ab-503b4f0ef40e" 00:14:31.915 ], 00:14:31.915 "product_name": "NVMe disk", 00:14:31.915 "block_size": 4096, 00:14:31.915 "num_blocks": 1310720, 00:14:31.915 "uuid": "f0e27f2e-dfa8-46cb-a8ab-503b4f0ef40e", 00:14:31.915 "assigned_rate_limits": { 00:14:31.915 "rw_ios_per_sec": 0, 00:14:31.915 "rw_mbytes_per_sec": 0, 00:14:31.915 "r_mbytes_per_sec": 0, 00:14:31.915 "w_mbytes_per_sec": 0 00:14:31.915 }, 00:14:31.915 "claimed": false, 00:14:31.915 "zoned": false, 00:14:31.915 "supported_io_types": { 00:14:31.915 "read": true, 00:14:31.915 "write": true, 00:14:31.915 "unmap": true, 00:14:31.915 "write_zeroes": true, 00:14:31.915 "flush": true, 00:14:31.915 "reset": true, 00:14:31.915 "compare": true, 00:14:31.915 "compare_and_write": false, 00:14:31.915 "abort": true, 00:14:31.915 "nvme_admin": true, 00:14:31.915 "nvme_io": true 00:14:31.915 }, 00:14:31.915 "driver_specific": { 00:14:31.915 "nvme": [ 00:14:31.915 { 00:14:31.915 "pci_address": "0000:00:07.0", 00:14:31.915 "trid": { 00:14:31.915 "trtype": "PCIe", 00:14:31.915 "traddr": "0000:00:07.0" 00:14:31.915 }, 00:14:31.915 "ctrlr_data": { 00:14:31.915 "cntlid": 0, 00:14:31.915 "vendor_id": "0x1b36", 00:14:31.915 "model_number": "QEMU NVMe Ctrl", 00:14:31.915 "serial_number": 
"12341", 00:14:31.915 "firmware_revision": "8.0.0", 00:14:31.915 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:31.915 "oacs": { 00:14:31.915 "security": 0, 00:14:31.915 "format": 1, 00:14:31.915 "firmware": 0, 00:14:31.915 "ns_manage": 1 00:14:31.915 }, 00:14:31.915 "multi_ctrlr": false, 00:14:31.915 "ana_reporting": false 00:14:31.915 }, 00:14:31.915 "vs": { 00:14:31.915 "nvme_version": "1.4" 00:14:31.915 }, 00:14:31.915 "ns_data": { 00:14:31.915 "id": 1, 00:14:31.915 "can_share": false 00:14:31.915 } 00:14:31.915 } 00:14:31.915 ], 00:14:31.915 "mp_policy": "active_passive" 00:14:31.915 } 00:14:31.915 } 00:14:31.915 ]' 00:14:31.915 06:39:40 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:31.915 06:39:40 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:31.915 06:39:40 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:31.915 06:39:40 -- common/autotest_common.sh@1373 -- # nb=1310720 00:14:31.915 06:39:40 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:14:31.915 06:39:40 -- common/autotest_common.sh@1377 -- # echo 5120 00:14:31.915 06:39:40 -- ftl/common.sh@63 -- # base_size=5120 00:14:31.915 06:39:40 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:31.915 06:39:40 -- ftl/common.sh@67 -- # clear_lvols 00:14:31.915 06:39:40 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:31.915 06:39:40 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:31.915 06:39:40 -- ftl/common.sh@28 -- # stores= 00:14:31.915 06:39:40 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:31.915 06:39:41 -- ftl/common.sh@68 -- # lvs=b5b214cb-ff99-4c1d-b2c8-40fa20e20809 00:14:31.915 06:39:41 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5b214cb-ff99-4c1d-b2c8-40fa20e20809 00:14:31.915 06:39:41 -- ftl/fio.sh@48 -- # split_bdev=76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- ftl/common.sh@35 -- # local name=nvc0 00:14:31.915 06:39:41 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:14:31.915 06:39:41 -- ftl/common.sh@37 -- # local base_bdev=76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- ftl/common.sh@38 -- # local cache_size= 00:14:31.915 06:39:41 -- ftl/common.sh@41 -- # get_bdev_size 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1367 -- # local bdev_name=76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:31.915 06:39:41 -- common/autotest_common.sh@1369 -- # local bs 00:14:31.915 06:39:41 -- common/autotest_common.sh@1370 -- # local nb 00:14:31.915 06:39:41 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:31.915 { 00:14:31.915 "name": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.915 "aliases": [ 00:14:31.915 "lvs/nvme0n1p0" 00:14:31.915 ], 00:14:31.915 "product_name": "Logical Volume", 00:14:31.915 "block_size": 4096, 00:14:31.915 "num_blocks": 26476544, 00:14:31.915 "uuid": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.915 "assigned_rate_limits": { 00:14:31.915 "rw_ios_per_sec": 0, 00:14:31.915 "rw_mbytes_per_sec": 0, 00:14:31.915 "r_mbytes_per_sec": 0, 00:14:31.915 
"w_mbytes_per_sec": 0 00:14:31.915 }, 00:14:31.915 "claimed": false, 00:14:31.915 "zoned": false, 00:14:31.915 "supported_io_types": { 00:14:31.915 "read": true, 00:14:31.915 "write": true, 00:14:31.915 "unmap": true, 00:14:31.915 "write_zeroes": true, 00:14:31.915 "flush": false, 00:14:31.915 "reset": true, 00:14:31.915 "compare": false, 00:14:31.915 "compare_and_write": false, 00:14:31.915 "abort": false, 00:14:31.915 "nvme_admin": false, 00:14:31.915 "nvme_io": false 00:14:31.915 }, 00:14:31.915 "driver_specific": { 00:14:31.915 "lvol": { 00:14:31.915 "lvol_store_uuid": "b5b214cb-ff99-4c1d-b2c8-40fa20e20809", 00:14:31.915 "base_bdev": "nvme0n1", 00:14:31.915 "thin_provision": true, 00:14:31.915 "snapshot": false, 00:14:31.915 "clone": false, 00:14:31.915 "esnap_clone": false 00:14:31.915 } 00:14:31.915 } 00:14:31.915 } 00:14:31.915 ]' 00:14:31.915 06:39:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:31.915 06:39:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:31.915 06:39:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:31.915 06:39:41 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:31.915 06:39:41 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:31.915 06:39:41 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:31.915 06:39:41 -- ftl/common.sh@41 -- # local base_size=5171 00:14:31.915 06:39:41 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:31.915 06:39:41 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:31.915 06:39:41 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:31.915 06:39:41 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:31.915 06:39:41 -- ftl/common.sh@48 -- # get_bdev_size 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1367 -- # local bdev_name=76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:31.915 06:39:41 -- common/autotest_common.sh@1369 -- # local bs 00:14:31.915 06:39:41 -- common/autotest_common.sh@1370 -- # local nb 00:14:31.915 06:39:41 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.915 06:39:41 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:31.915 { 00:14:31.915 "name": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.915 "aliases": [ 00:14:31.915 "lvs/nvme0n1p0" 00:14:31.915 ], 00:14:31.915 "product_name": "Logical Volume", 00:14:31.915 "block_size": 4096, 00:14:31.915 "num_blocks": 26476544, 00:14:31.915 "uuid": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.915 "assigned_rate_limits": { 00:14:31.915 "rw_ios_per_sec": 0, 00:14:31.915 "rw_mbytes_per_sec": 0, 00:14:31.915 "r_mbytes_per_sec": 0, 00:14:31.915 "w_mbytes_per_sec": 0 00:14:31.915 }, 00:14:31.915 "claimed": false, 00:14:31.915 "zoned": false, 00:14:31.915 "supported_io_types": { 00:14:31.915 "read": true, 00:14:31.915 "write": true, 00:14:31.915 "unmap": true, 00:14:31.915 "write_zeroes": true, 00:14:31.915 "flush": false, 00:14:31.915 "reset": true, 00:14:31.915 "compare": false, 00:14:31.915 "compare_and_write": false, 00:14:31.915 "abort": false, 00:14:31.915 "nvme_admin": false, 00:14:31.915 "nvme_io": false 00:14:31.915 }, 00:14:31.915 "driver_specific": { 00:14:31.915 "lvol": { 00:14:31.915 "lvol_store_uuid": "b5b214cb-ff99-4c1d-b2c8-40fa20e20809", 00:14:31.915 "base_bdev": "nvme0n1", 00:14:31.915 "thin_provision": true, 
00:14:31.915 "snapshot": false, 00:14:31.915 "clone": false, 00:14:31.915 "esnap_clone": false 00:14:31.915 } 00:14:31.915 } 00:14:31.915 } 00:14:31.915 ]' 00:14:31.915 06:39:41 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:31.915 06:39:41 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:31.915 06:39:41 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:31.916 06:39:41 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:31.916 06:39:41 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:31.916 06:39:41 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:31.916 06:39:41 -- ftl/common.sh@48 -- # cache_size=5171 00:14:31.916 06:39:41 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:31.916 06:39:42 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:31.916 06:39:42 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:31.916 06:39:42 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:31.916 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:31.916 06:39:42 -- ftl/fio.sh@56 -- # get_bdev_size 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.916 06:39:42 -- common/autotest_common.sh@1367 -- # local bdev_name=76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.916 06:39:42 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:31.916 06:39:42 -- common/autotest_common.sh@1369 -- # local bs 00:14:31.916 06:39:42 -- common/autotest_common.sh@1370 -- # local nb 00:14:31.916 06:39:42 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 76bb28fa-941c-4522-801a-74cf776edd8f 00:14:31.916 06:39:42 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:31.916 { 00:14:31.916 "name": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.916 "aliases": [ 00:14:31.916 "lvs/nvme0n1p0" 00:14:31.916 ], 00:14:31.916 "product_name": "Logical Volume", 00:14:31.916 "block_size": 4096, 00:14:31.916 "num_blocks": 26476544, 00:14:31.916 "uuid": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:31.916 "assigned_rate_limits": { 00:14:31.916 "rw_ios_per_sec": 0, 00:14:31.916 "rw_mbytes_per_sec": 0, 00:14:31.916 "r_mbytes_per_sec": 0, 00:14:31.916 "w_mbytes_per_sec": 0 00:14:31.916 }, 00:14:31.916 "claimed": false, 00:14:31.916 "zoned": false, 00:14:31.916 "supported_io_types": { 00:14:31.916 "read": true, 00:14:31.916 "write": true, 00:14:31.916 "unmap": true, 00:14:31.916 "write_zeroes": true, 00:14:31.916 "flush": false, 00:14:31.916 "reset": true, 00:14:31.916 "compare": false, 00:14:31.916 "compare_and_write": false, 00:14:31.916 "abort": false, 00:14:31.916 "nvme_admin": false, 00:14:31.916 "nvme_io": false 00:14:31.916 }, 00:14:31.916 "driver_specific": { 00:14:31.916 "lvol": { 00:14:31.916 "lvol_store_uuid": "b5b214cb-ff99-4c1d-b2c8-40fa20e20809", 00:14:31.916 "base_bdev": "nvme0n1", 00:14:31.916 "thin_provision": true, 00:14:31.916 "snapshot": false, 00:14:31.916 "clone": false, 00:14:31.916 "esnap_clone": false 00:14:31.916 } 00:14:31.916 } 00:14:31.916 } 00:14:31.916 ]' 00:14:31.916 06:39:42 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:31.916 06:39:42 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:31.916 06:39:42 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:31.916 06:39:42 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:31.916 06:39:42 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:31.916 06:39:42 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:31.916 
06:39:42 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:31.916 06:39:42 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:31.916 06:39:42 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 76bb28fa-941c-4522-801a-74cf776edd8f -c nvc0n1p0 --l2p_dram_limit 60 00:14:31.916 [2024-11-28 06:39:42.577421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.577560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:31.916 [2024-11-28 06:39:42.577578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:31.916 [2024-11-28 06:39:42.577587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.577667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.577675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:31.916 [2024-11-28 06:39:42.577686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:14:31.916 [2024-11-28 06:39:42.577692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.577733] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:31.916 [2024-11-28 06:39:42.577947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:31.916 [2024-11-28 06:39:42.577967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.577973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:31.916 [2024-11-28 06:39:42.577981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:14:31.916 [2024-11-28 06:39:42.577987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.578045] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42d0555a-7194-4fb6-9755-c9594bc4b663 00:14:31.916 [2024-11-28 06:39:42.579020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.579054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:31.916 [2024-11-28 06:39:42.579069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:14:31.916 [2024-11-28 06:39:42.579076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.583820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.583935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:31.916 [2024-11-28 06:39:42.583947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:14:31.916 [2024-11-28 06:39:42.583964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.584028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.584037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:31.916 [2024-11-28 06:39:42.584043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:14:31.916 [2024-11-28 06:39:42.584049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.584097] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.584108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:31.916 [2024-11-28 06:39:42.584114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:31.916 [2024-11-28 06:39:42.584121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.584148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:31.916 [2024-11-28 06:39:42.585396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.585422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:31.916 [2024-11-28 06:39:42.585431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:14:31.916 [2024-11-28 06:39:42.585437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.585471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.585478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:31.916 [2024-11-28 06:39:42.585496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:31.916 [2024-11-28 06:39:42.585502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.585525] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:31.916 [2024-11-28 06:39:42.585619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:14:31.916 [2024-11-28 06:39:42.585632] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:31.916 [2024-11-28 06:39:42.585647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:14:31.916 [2024-11-28 06:39:42.585656] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:31.916 [2024-11-28 06:39:42.585663] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:31.916 [2024-11-28 06:39:42.585670] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:31.916 [2024-11-28 06:39:42.585676] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:31.916 [2024-11-28 06:39:42.585683] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:14:31.916 [2024-11-28 06:39:42.585688] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:14:31.916 [2024-11-28 06:39:42.585695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.585700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:31.916 [2024-11-28 06:39:42.585718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:14:31.916 [2024-11-28 06:39:42.585724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.585779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.916 [2024-11-28 06:39:42.585786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:31.916 [2024-11-28 06:39:42.585795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.035 ms 00:14:31.916 [2024-11-28 06:39:42.585800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.916 [2024-11-28 06:39:42.585890] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:31.916 [2024-11-28 06:39:42.585897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:31.916 [2024-11-28 06:39:42.585905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:31.916 [2024-11-28 06:39:42.585911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:31.916 [2024-11-28 06:39:42.585920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:31.916 [2024-11-28 06:39:42.585925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:31.916 [2024-11-28 06:39:42.585931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:31.916 [2024-11-28 06:39:42.585936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:31.916 [2024-11-28 06:39:42.585942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:31.916 [2024-11-28 06:39:42.585947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:31.916 [2024-11-28 06:39:42.585953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:31.916 [2024-11-28 06:39:42.585958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:31.917 [2024-11-28 06:39:42.585966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:31.917 [2024-11-28 06:39:42.585971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:31.917 [2024-11-28 06:39:42.585977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:14:31.917 [2024-11-28 06:39:42.585983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:31.917 [2024-11-28 06:39:42.585989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:31.917 [2024-11-28 06:39:42.585994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:14:31.917 [2024-11-28 06:39:42.586003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:14:31.917 [2024-11-28 06:39:42.586015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:14:31.917 [2024-11-28 06:39:42.586020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:31.917 [2024-11-28 06:39:42.586032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:31.917 [2024-11-28 06:39:42.586051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586065] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:31.917 [2024-11-28 06:39:42.586071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586078] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:31.917 [2024-11-28 06:39:42.586090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:31.917 [2024-11-28 06:39:42.586108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:31.917 [2024-11-28 06:39:42.586121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:31.917 [2024-11-28 06:39:42.586128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:14:31.917 [2024-11-28 06:39:42.586134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:31.917 [2024-11-28 06:39:42.586140] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:31.917 [2024-11-28 06:39:42.586154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:31.917 [2024-11-28 06:39:42.586162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:31.917 [2024-11-28 06:39:42.586177] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:31.917 [2024-11-28 06:39:42.586183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:31.917 [2024-11-28 06:39:42.586190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:31.917 [2024-11-28 06:39:42.586196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:31.917 [2024-11-28 06:39:42.586204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:31.917 [2024-11-28 06:39:42.586209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:31.917 [2024-11-28 06:39:42.586221] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:31.917 [2024-11-28 06:39:42.586228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:31.917 [2024-11-28 06:39:42.586237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:31.917 [2024-11-28 06:39:42.586244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:14:31.917 [2024-11-28 06:39:42.586251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:14:31.917 [2024-11-28 06:39:42.586257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:14:31.917 [2024-11-28 06:39:42.586264] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:14:31.917 [2024-11-28 06:39:42.586270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:14:31.917 
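A quick arithmetic check on the layout dump above: FTL reports 20971520 L2P entries at 4 bytes each, i.e. an 80 MiB full map, which is exactly the 'Region l2p ... blocks: 80.00 MiB' line; and since ftl0 was created with --l2p_dram_limit 60, only a roughly 60 MiB window of that map will be kept resident (the log confirms this shortly with 'l2p maximum resident size is: 59 (of 60) MiB'). As a one-liner:

    # L2P sizing check from the values dumped above.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # entries * 4 B -> 80 (MiB)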
[2024-11-28 06:39:42.586278] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:14:31.917 [2024-11-28 06:39:42.586284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:14:31.917 [2024-11-28 06:39:42.586293] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:14:31.917 [2024-11-28 06:39:42.586299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:14:31.917 [2024-11-28 06:39:42.586307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:14:31.917 [2024-11-28 06:39:42.586313] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:14:31.917 [2024-11-28 06:39:42.586321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:14:31.917 [2024-11-28 06:39:42.586327] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:31.917 [2024-11-28 06:39:42.586336] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:31.917 [2024-11-28 06:39:42.586342] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:31.917 [2024-11-28 06:39:42.586350] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:31.917 [2024-11-28 06:39:42.586356] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:31.917 [2024-11-28 06:39:42.586363] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:31.917 [2024-11-28 06:39:42.586370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.917 [2024-11-28 06:39:42.586377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:31.917 [2024-11-28 06:39:42.586385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:14:31.917 [2024-11-28 06:39:42.586392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.917 [2024-11-28 06:39:42.591692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.917 [2024-11-28 06:39:42.591729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:31.917 [2024-11-28 06:39:42.591736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.249 ms 00:14:31.918 [2024-11-28 06:39:42.591743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.591816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.591834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:31.918 [2024-11-28 06:39:42.591840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:14:31.918 [2024-11-28 06:39:42.591849] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.599803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.599832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:31.918 [2024-11-28 06:39:42.599840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.918 ms 00:14:31.918 [2024-11-28 06:39:42.599847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.599873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.599880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:31.918 [2024-11-28 06:39:42.599888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:31.918 [2024-11-28 06:39:42.599894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.600197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.600213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:31.918 [2024-11-28 06:39:42.600220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:14:31.918 [2024-11-28 06:39:42.600227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.600323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.600332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:31.918 [2024-11-28 06:39:42.600338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:14:31.918 [2024-11-28 06:39:42.600345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.614268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.614483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:31.918 [2024-11-28 06:39:42.614514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.895 ms 00:14:31.918 [2024-11-28 06:39:42.614531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:31.918 [2024-11-28 06:39:42.626380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:31.918 [2024-11-28 06:39:42.638354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:31.918 [2024-11-28 06:39:42.638382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:31.918 [2024-11-28 06:39:42.638392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.616 ms 00:14:31.918 [2024-11-28 06:39:42.638398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:32.177 [2024-11-28 06:39:42.683375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:32.177 [2024-11-28 06:39:42.683412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:32.177 [2024-11-28 06:39:42.683422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.944 ms 00:14:32.177 [2024-11-28 06:39:42.683429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:32.177 [2024-11-28 06:39:42.683476] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
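Stepping back, the whole stack that FTL is now starting on was built with six RPCs: attach the base controller, carve an lvstore and a thin-provisioned 103424 MiB lvol out of it (the data device), attach the cache controller, split off a 5171 MiB slice of it (the write-buffer NV cache), and finally create the FTL bdev on top. A condensed replay, with the UUIDs copied from the log and rpc.py assumed on PATH:

    # The bdev stack behind ftl0, replayed from the trace above.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5b214cb-ff99-4c1d-b2c8-40fa20e20809
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 76bb28fa-941c-4522-801a-74cf776edd8f -c nvc0n1p0 --l2p_dram_limit 60

The scrub announced just above wipes the 4 GiB data_nvc region of the cache slice (0x100000 blocks of 4 KiB); on this QEMU-backed device the next trace line shows it completing in just under two seconds.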
00:14:32.177 [2024-11-28 06:39:42.683487] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:34.081 [2024-11-28 06:39:44.631120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.631324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:34.081 [2024-11-28 06:39:44.631346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1947.624 ms 00:14:34.081 [2024-11-28 06:39:44.631353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.631517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.631526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:34.081 [2024-11-28 06:39:44.631536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:14:34.081 [2024-11-28 06:39:44.631542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.633946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.633975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:34.081 [2024-11-28 06:39:44.633988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:14:34.081 [2024-11-28 06:39:44.633994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.635900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.635924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:34.081 [2024-11-28 06:39:44.635933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms 00:14:34.081 [2024-11-28 06:39:44.635938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.636081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.636089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:34.081 [2024-11-28 06:39:44.636096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:14:34.081 [2024-11-28 06:39:44.636102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.651021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.651057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:34.081 [2024-11-28 06:39:44.651066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.894 ms 00:14:34.081 [2024-11-28 06:39:44.651072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.654138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.654165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:34.081 [2024-11-28 06:39:44.654176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:14:34.081 [2024-11-28 06:39:44.654182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.657291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.657403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:34.081 
[2024-11-28 06:39:44.657418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:14:34.081 [2024-11-28 06:39:44.657424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.659901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.659926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:34.081 [2024-11-28 06:39:44.659936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:14:34.081 [2024-11-28 06:39:44.659942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.659970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.659985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:34.081 [2024-11-28 06:39:44.659994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:34.081 [2024-11-28 06:39:44.659999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.660053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.081 [2024-11-28 06:39:44.660060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:34.081 [2024-11-28 06:39:44.660069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:14:34.081 [2024-11-28 06:39:44.660074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.081 [2024-11-28 06:39:44.660941] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2083.059 ms, result 0 00:14:34.081 { 00:14:34.081 "name": "ftl0", 00:14:34.081 "uuid": "42d0555a-7194-4fb6-9755-c9594bc4b663" 00:14:34.081 } 00:14:34.081 06:39:44 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:34.081 06:39:44 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:14:34.081 06:39:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.081 06:39:44 -- common/autotest_common.sh@899 -- # local i 00:14:34.081 06:39:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.081 06:39:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.082 06:39:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:34.341 06:39:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:34.341 [ 00:14:34.341 { 00:14:34.341 "name": "ftl0", 00:14:34.341 "aliases": [ 00:14:34.341 "42d0555a-7194-4fb6-9755-c9594bc4b663" 00:14:34.341 ], 00:14:34.341 "product_name": "FTL disk", 00:14:34.341 "block_size": 4096, 00:14:34.341 "num_blocks": 20971520, 00:14:34.341 "uuid": "42d0555a-7194-4fb6-9755-c9594bc4b663", 00:14:34.341 "assigned_rate_limits": { 00:14:34.341 "rw_ios_per_sec": 0, 00:14:34.341 "rw_mbytes_per_sec": 0, 00:14:34.341 "r_mbytes_per_sec": 0, 00:14:34.341 "w_mbytes_per_sec": 0 00:14:34.341 }, 00:14:34.341 "claimed": false, 00:14:34.341 "zoned": false, 00:14:34.341 "supported_io_types": { 00:14:34.341 "read": true, 00:14:34.341 "write": true, 00:14:34.341 "unmap": true, 00:14:34.341 "write_zeroes": true, 00:14:34.341 "flush": true, 00:14:34.341 "reset": false, 00:14:34.341 "compare": false, 00:14:34.341 "compare_and_write": false, 00:14:34.341 "abort": false, 00:14:34.341 "nvme_admin": false, 00:14:34.341 "nvme_io": false 00:14:34.341 }, 00:14:34.341 
"driver_specific": { 00:14:34.341 "ftl": { 00:14:34.341 "base_bdev": "76bb28fa-941c-4522-801a-74cf776edd8f", 00:14:34.341 "cache": "nvc0n1p0" 00:14:34.341 } 00:14:34.341 } 00:14:34.341 } 00:14:34.341 ] 00:14:34.341 06:39:45 -- common/autotest_common.sh@905 -- # return 0 00:14:34.341 06:39:45 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:34.341 06:39:45 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:34.599 06:39:45 -- ftl/fio.sh@70 -- # echo ']}' 00:14:34.599 06:39:45 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:34.859 [2024-11-28 06:39:45.411274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.411400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:34.859 [2024-11-28 06:39:45.411459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:34.859 [2024-11-28 06:39:45.411471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.411501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:34.859 [2024-11-28 06:39:45.411939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.411953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:34.859 [2024-11-28 06:39:45.411963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:14:34.859 [2024-11-28 06:39:45.411970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.412311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.412319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:34.859 [2024-11-28 06:39:45.412328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:14:34.859 [2024-11-28 06:39:45.412335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.414869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.414928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:34.859 [2024-11-28 06:39:45.414969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:14:34.859 [2024-11-28 06:39:45.414987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.419557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.419589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:14:34.859 [2024-11-28 06:39:45.419599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:14:34.859 [2024-11-28 06:39:45.419612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.420992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.421019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:34.859 [2024-11-28 06:39:45.421027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:14:34.859 [2024-11-28 06:39:45.421033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.424512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.424540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:34.859 [2024-11-28 06:39:45.424549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.445 ms 00:14:34.859 [2024-11-28 06:39:45.424555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.424683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.424691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:34.859 [2024-11-28 06:39:45.424700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:14:34.859 [2024-11-28 06:39:45.424731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.426160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.426187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:14:34.859 [2024-11-28 06:39:45.426195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:14:34.859 [2024-11-28 06:39:45.426200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.427076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.427164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:14:34.859 [2024-11-28 06:39:45.427179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:14:34.859 [2024-11-28 06:39:45.427184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.427897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.427919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:34.859 [2024-11-28 06:39:45.427927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:14:34.859 [2024-11-28 06:39:45.427932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.428790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.859 [2024-11-28 06:39:45.428816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:34.859 [2024-11-28 06:39:45.428824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:14:34.859 [2024-11-28 06:39:45.428829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.859 [2024-11-28 06:39:45.428861] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:34.859 [2024-11-28 06:39:45.428872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.428999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:34.859 [2024-11-28 06:39:45.429038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429073] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 
06:39:45.429258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:14:34.860 [2024-11-28 06:39:45.429420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:34.860 [2024-11-28 06:39:45.429555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:34.860 [2024-11-28 06:39:45.429563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d0555a-7194-4fb6-9755-c9594bc4b663 00:14:34.860 [2024-11-28 06:39:45.429569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:34.860 [2024-11-28 06:39:45.429576] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:34.860 [2024-11-28 06:39:45.429582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:34.860 [2024-11-28 06:39:45.429588] ftl_debug.c: 216:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] WAF: inf 00:14:34.860 [2024-11-28 06:39:45.429594] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:34.860 [2024-11-28 06:39:45.429604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:34.860 [2024-11-28 06:39:45.429609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:34.860 [2024-11-28 06:39:45.429615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:34.860 [2024-11-28 06:39:45.429620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:34.860 [2024-11-28 06:39:45.429626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.860 [2024-11-28 06:39:45.429632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:34.860 [2024-11-28 06:39:45.429639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:14:34.860 [2024-11-28 06:39:45.429644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.860 [2024-11-28 06:39:45.430935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.860 [2024-11-28 06:39:45.430952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:34.861 [2024-11-28 06:39:45.430959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:14:34.861 [2024-11-28 06:39:45.430967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.431018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:34.861 [2024-11-28 06:39:45.431024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:34.861 [2024-11-28 06:39:45.431031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:14:34.861 [2024-11-28 06:39:45.431037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.435523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.435548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:34.861 [2024-11-28 06:39:45.435559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.435564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.435616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.435623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:34.861 [2024-11-28 06:39:45.435630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.435635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.435700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.435718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:34.861 [2024-11-28 06:39:45.435726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.435732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.435756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.435762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:34.861 [2024-11-28 
06:39:45.435769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.435775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.443875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.443914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:34.861 [2024-11-28 06:39:45.443925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.443931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:34.861 [2024-11-28 06:39:45.447125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:34.861 [2024-11-28 06:39:45.447195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:34.861 [2024-11-28 06:39:45.447260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:34.861 [2024-11-28 06:39:45.447345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:34.861 [2024-11-28 06:39:45.447410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:34.861 [2024-11-28 06:39:45.447470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:34.861 [2024-11-28 06:39:45.447530] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:34.861 [2024-11-28 06:39:45.447537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:34.861 [2024-11-28 06:39:45.447543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:34.861 [2024-11-28 06:39:45.447657] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 36.358 ms, result 0 00:14:34.861 true 00:14:34.861 06:39:45 -- ftl/fio.sh@75 -- # killprocess 81738 00:14:34.861 06:39:45 -- common/autotest_common.sh@936 -- # '[' -z 81738 ']' 00:14:34.861 06:39:45 -- common/autotest_common.sh@940 -- # kill -0 81738 00:14:34.861 06:39:45 -- common/autotest_common.sh@941 -- # uname 00:14:34.861 06:39:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.861 06:39:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81738 00:14:34.861 killing process with pid 81738 00:14:34.861 06:39:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.861 06:39:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.861 06:39:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81738' 00:14:34.861 06:39:45 -- common/autotest_common.sh@955 -- # kill 81738 00:14:34.861 06:39:45 -- common/autotest_common.sh@960 -- # wait 81738 00:14:40.131 06:39:50 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:40.131 06:39:50 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:40.131 06:39:50 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:40.131 06:39:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.131 06:39:50 -- common/autotest_common.sh@10 -- # set +x 00:14:40.131 06:39:50 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:40.131 06:39:50 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:40.131 06:39:50 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:40.131 06:39:50 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.131 06:39:50 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:40.131 06:39:50 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.131 06:39:50 -- common/autotest_common.sh@1330 -- # shift 00:14:40.131 06:39:50 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:40.131 06:39:50 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.131 06:39:50 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.131 06:39:50 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:40.131 06:39:50 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:40.131 06:39:50 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:40.131 06:39:50 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:40.131 06:39:50 -- common/autotest_common.sh@1336 -- # break 00:14:40.131 06:39:50 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:40.131 06:39:50 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:40.131 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 
68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:40.131 fio-3.35 00:14:40.131 Starting 1 thread 00:14:44.322 00:14:44.322 test: (groupid=0, jobs=1): err= 0: pid=81913: Thu Nov 28 06:39:54 2024 00:14:44.322 read: IOPS=1141, BW=75.8MiB/s (79.5MB/s)(255MiB/3357msec) 00:14:44.322 slat (usec): min=2, max=110, avg= 4.55, stdev= 2.83 00:14:44.322 clat (usec): min=234, max=1176, avg=392.49, stdev=118.48 00:14:44.322 lat (usec): min=238, max=1180, avg=397.04, stdev=118.98 00:14:44.322 clat percentiles (usec): 00:14:44.322 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 306], 00:14:44.322 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 392], 00:14:44.322 | 70.00th=[ 457], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 570], 00:14:44.322 | 99.00th=[ 832], 99.50th=[ 898], 99.90th=[ 1074], 99.95th=[ 1090], 00:14:44.322 | 99.99th=[ 1172] 00:14:44.322 write: IOPS=1149, BW=76.3MiB/s (80.1MB/s)(256MiB/3354msec); 0 zone resets 00:14:44.322 slat (nsec): min=13545, max=87439, avg=18975.83, stdev=4667.48 00:14:44.322 clat (usec): min=261, max=1927, avg=443.95, stdev=162.80 00:14:44.322 lat (usec): min=281, max=1945, avg=462.93, stdev=163.70 00:14:44.322 clat percentiles (usec): 00:14:44.322 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 330], 00:14:44.322 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 461], 00:14:44.322 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 701], 00:14:44.322 | 99.00th=[ 979], 99.50th=[ 1221], 99.90th=[ 1680], 99.95th=[ 1795], 00:14:44.322 | 99.99th=[ 1926] 00:14:44.322 bw ( KiB/s): min=59041, max=97104, per=96.73%, avg=75618.83, stdev=15266.15, samples=6 00:14:44.322 iops : min= 868, max= 1428, avg=1112.00, stdev=224.56, samples=6 00:14:44.322 lat (usec) : 250=0.14%, 500=72.10%, 750=24.59%, 1000=2.63% 00:14:44.322 lat (msec) : 2=0.53% 00:14:44.322 cpu : usr=98.75%, sys=0.42%, ctx=11, majf=0, minf=1326 00:14:44.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.322 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.322 00:14:44.322 Run status group 0 (all jobs): 00:14:44.322 READ: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=255MiB (267MB), run=3357-3357msec 00:14:44.322 WRITE: bw=76.3MiB/s (80.1MB/s), 76.3MiB/s-76.3MiB/s (80.1MB/s-80.1MB/s), io=256MiB (269MB), run=3354-3354msec 00:14:44.582 ----------------------------------------------------- 00:14:44.582 Suppressions used: 00:14:44.582 count bytes template 00:14:44.582 1 5 /usr/src/fio/parse.c 00:14:44.582 1 8 libtcmalloc_minimal.so 00:14:44.582 1 904 libcrypto.so 00:14:44.582 ----------------------------------------------------- 00:14:44.582 00:14:44.582 06:39:55 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:44.582 06:39:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.582 06:39:55 -- common/autotest_common.sh@10 -- # set +x 00:14:44.582 06:39:55 -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:44.582 06:39:55 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:44.582 06:39:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.582 06:39:55 -- common/autotest_common.sh@10 -- # set +x 00:14:44.582 06:39:55 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:44.582 
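All three fio runs in this test go through SPDK's external spdk_bdev ioengine; the header above reports rw=randwrite, bs=68.0KiB, iodepth=1 for the first job. The repo's randw-verify.fio itself is not reproduced in this log, so the sketch below is only an illustrative job file in that shape — the verify method and the config filename are assumptions, not the file's exact contents:

    # Hypothetical job file in the style of randw-verify.fio (illustrative only).
    cat > randw-verify-sketch.fio <<'EOF'
    [global]
    ioengine=spdk_bdev        ; external engine at build/fio/spdk_bdev, loaded via LD_PRELOAD
    spdk_json_conf=ftl.json   ; bdev config; the test keeps one under test/ftl/config/
    thread=1                  ; the SPDK plugin requires fio's thread mode
    rw=randwrite
    bs=68k
    iodepth=1
    verify=md5                ; assumed; some verify flavor is implied by "randw-verify"
    [test]
    filename=ftl0             ; the FTL bdev brought up earlier in the run
    EOF
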
06:39:55 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:44.582 06:39:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:44.582 06:39:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.582 06:39:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:44.582 06:39:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.582 06:39:55 -- common/autotest_common.sh@1330 -- # shift 00:14:44.582 06:39:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:44.582 06:39:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.582 06:39:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.582 06:39:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:44.582 06:39:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:44.582 06:39:55 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.582 06:39:55 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.582 06:39:55 -- common/autotest_common.sh@1336 -- # break 00:14:44.582 06:39:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.582 06:39:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:44.841 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:44.841 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:44.841 fio-3.35 00:14:44.841 Starting 2 threads 00:15:06.770 00:15:06.770 first_half: (groupid=0, jobs=1): err= 0: pid=81999: Thu Nov 28 06:40:16 2024 00:15:06.770 read: IOPS=3268, BW=12.8MiB/s (13.4MB/s)(256MiB/20032msec) 00:15:06.770 slat (nsec): min=2972, max=53718, avg=3720.72, stdev=631.13 00:15:06.770 clat (usec): min=462, max=253530, avg=33553.19, stdev=20451.34 00:15:06.770 lat (usec): min=465, max=253536, avg=33556.91, stdev=20451.43 00:15:06.770 clat percentiles (msec): 00:15:06.770 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:15:06.770 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 29], 00:15:06.770 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 65], 00:15:06.770 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 197], 99.95th=[ 232], 00:15:06.770 | 99.99th=[ 251] 00:15:06.770 write: IOPS=3276, BW=12.8MiB/s (13.4MB/s)(256MiB/19999msec); 0 zone resets 00:15:06.770 slat (usec): min=3, max=304, avg= 5.04, stdev= 2.60 00:15:06.770 clat (usec): min=343, max=33393, avg=5577.92, stdev=5319.93 00:15:06.770 lat (usec): min=348, max=33398, avg=5582.96, stdev=5320.00 00:15:06.770 clat percentiles (usec): 00:15:06.770 | 1.00th=[ 660], 5.00th=[ 824], 10.00th=[ 1139], 20.00th=[ 2507], 00:15:06.770 | 30.00th=[ 3097], 40.00th=[ 3851], 50.00th=[ 4555], 60.00th=[ 5014], 00:15:06.770 | 70.00th=[ 5407], 80.00th=[ 6587], 90.00th=[ 9896], 95.00th=[16450], 00:15:06.770 | 99.00th=[28705], 99.50th=[29754], 99.90th=[31851], 99.95th=[32375], 00:15:06.770 | 99.99th=[32900] 00:15:06.770 bw ( KiB/s): min= 2704, max=55640, per=100.00%, avg=27394.11, stdev=14581.75, samples=19 00:15:06.770 iops : min= 676, max=13910, 
avg=6848.53, stdev=3645.44, samples=19 00:15:06.770 lat (usec) : 500=0.08%, 750=1.55%, 1000=2.27% 00:15:06.770 lat (msec) : 2=3.68%, 4=13.53%, 10=25.36%, 20=2.89%, 50=47.57% 00:15:06.770 lat (msec) : 100=1.54%, 250=1.53%, 500=0.01% 00:15:06.770 cpu : usr=99.52%, sys=0.09%, ctx=38, majf=0, minf=5595 00:15:06.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:06.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.770 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.770 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.770 second_half: (groupid=0, jobs=1): err= 0: pid=82000: Thu Nov 28 06:40:16 2024 00:15:06.770 read: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(256MiB/19892msec) 00:15:06.770 slat (nsec): min=3014, max=19799, avg=3854.16, stdev=855.01 00:15:06.770 clat (msec): min=8, max=178, avg=33.80, stdev=18.16 00:15:06.770 lat (msec): min=8, max=178, avg=33.80, stdev=18.16 00:15:06.770 clat percentiles (msec): 00:15:06.770 | 1.00th=[ 25], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:15:06.770 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 29], 00:15:06.770 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 60], 00:15:06.770 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 161], 00:15:06.770 | 99.99th=[ 165] 00:15:06.770 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(256MiB/18486msec); 0 zone resets 00:15:06.770 slat (usec): min=3, max=382, avg= 5.24, stdev= 2.88 00:15:06.770 clat (usec): min=308, max=36157, avg=5055.44, stdev=3256.67 00:15:06.770 lat (usec): min=314, max=36161, avg=5060.68, stdev=3257.02 00:15:06.770 clat percentiles (usec): 00:15:06.770 | 1.00th=[ 783], 5.00th=[ 1532], 10.00th=[ 2245], 20.00th=[ 2769], 00:15:06.770 | 30.00th=[ 3326], 40.00th=[ 3949], 50.00th=[ 4555], 60.00th=[ 4948], 00:15:06.770 | 70.00th=[ 5276], 80.00th=[ 6194], 90.00th=[ 9503], 95.00th=[10290], 00:15:06.770 | 99.00th=[16450], 99.50th=[23725], 99.90th=[31589], 99.95th=[31851], 00:15:06.770 | 99.99th=[34866] 00:15:06.770 bw ( KiB/s): min= 256, max=47808, per=100.00%, avg=27594.11, stdev=17313.11, samples=19 00:15:06.770 iops : min= 64, max=11952, avg=6898.53, stdev=4328.28, samples=19 00:15:06.770 lat (usec) : 500=0.04%, 750=0.34%, 1000=0.94% 00:15:06.770 lat (msec) : 2=2.43%, 4=16.65%, 10=26.47%, 20=2.90%, 50=47.18% 00:15:06.770 lat (msec) : 100=1.65%, 250=1.40% 00:15:06.770 cpu : usr=99.49%, sys=0.12%, ctx=55, majf=0, minf=5547 00:15:06.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:06.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.770 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.770 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.770 00:15:06.770 Run status group 0 (all jobs): 00:15:06.770 READ: bw=25.5MiB/s (26.8MB/s), 12.8MiB/s-12.9MiB/s (13.4MB/s-13.5MB/s), io=512MiB (536MB), run=19892-20032msec 00:15:06.770 WRITE: bw=25.6MiB/s (26.8MB/s), 12.8MiB/s-13.8MiB/s (13.4MB/s-14.5MB/s), io=512MiB (537MB), run=18486-19999msec 00:15:07.030 ----------------------------------------------------- 00:15:07.030 Suppressions used: 00:15:07.030 count bytes template 00:15:07.030 2 10 /usr/src/fio/parse.c 00:15:07.030 3 288 /usr/src/fio/iolog.c 00:15:07.030 1 8 libtcmalloc_minimal.so 00:15:07.030 1 
904 libcrypto.so 00:15:07.030 ----------------------------------------------------- 00:15:07.030 00:15:07.030 06:40:17 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:07.030 06:40:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.030 06:40:17 -- common/autotest_common.sh@10 -- # set +x 00:15:07.030 06:40:17 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:07.030 06:40:17 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:07.030 06:40:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.030 06:40:17 -- common/autotest_common.sh@10 -- # set +x 00:15:07.030 06:40:17 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:07.030 06:40:17 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:07.030 06:40:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:07.030 06:40:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.030 06:40:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:07.030 06:40:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.030 06:40:17 -- common/autotest_common.sh@1330 -- # shift 00:15:07.030 06:40:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:07.030 06:40:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.030 06:40:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.030 06:40:17 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:07.030 06:40:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:07.289 06:40:17 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.289 06:40:17 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.289 06:40:17 -- common/autotest_common.sh@1336 -- # break 00:15:07.289 06:40:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.289 06:40:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:07.289 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:07.289 fio-3.35 00:15:07.289 Starting 1 thread 00:15:19.492 00:15:19.492 test: (groupid=0, jobs=1): err= 0: pid=82265: Thu Nov 28 06:40:30 2024 00:15:19.492 read: IOPS=8553, BW=33.4MiB/s (35.0MB/s)(255MiB/7623msec) 00:15:19.492 slat (nsec): min=2987, max=17736, avg=3420.20, stdev=500.20 00:15:19.492 clat (usec): min=464, max=29958, avg=14956.51, stdev=1694.86 00:15:19.492 lat (usec): min=470, max=29961, avg=14959.93, stdev=1694.88 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[13960], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:15:19.492 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14615], 00:15:19.492 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15270], 95.00th=[18744], 00:15:19.492 | 99.00th=[23200], 99.50th=[23725], 99.90th=[27132], 99.95th=[27919], 00:15:19.492 | 99.99th=[29230] 00:15:19.492 write: IOPS=17.1k, BW=66.9MiB/s (70.1MB/s)(256MiB/3828msec); 0 zone resets 00:15:19.492 slat (usec): min=3, max=171, avg= 5.87, stdev= 2.18 00:15:19.492 clat (usec): min=450, max=46043, avg=7435.61, 
stdev=9250.44 00:15:19.492 lat (usec): min=456, max=46048, avg=7441.48, stdev=9250.40 00:15:19.492 clat percentiles (usec): 00:15:19.492 | 1.00th=[ 594], 5.00th=[ 668], 10.00th=[ 766], 20.00th=[ 906], 00:15:19.492 | 30.00th=[ 1020], 40.00th=[ 1352], 50.00th=[ 5014], 60.00th=[ 5735], 00:15:19.492 | 70.00th=[ 6718], 80.00th=[ 8160], 90.00th=[26870], 95.00th=[28181], 00:15:19.492 | 99.00th=[32113], 99.50th=[34866], 99.90th=[43254], 99.95th=[43779], 00:15:19.492 | 99.99th=[45351] 00:15:19.492 bw ( KiB/s): min=39640, max=86568, per=95.70%, avg=65536.00, stdev=14977.73, samples=8 00:15:19.492 iops : min= 9910, max=21642, avg=16384.00, stdev=3744.43, samples=8 00:15:19.492 lat (usec) : 500=0.01%, 750=4.66%, 1000=9.61% 00:15:19.492 lat (msec) : 2=6.35%, 4=0.64%, 10=20.81%, 20=48.13%, 50=9.80% 00:15:19.492 cpu : usr=99.41%, sys=0.15%, ctx=22, majf=0, minf=5577 00:15:19.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:19.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.492 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:19.492 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:19.492 00:15:19.492 Run status group 0 (all jobs): 00:15:19.492 READ: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=255MiB (267MB), run=7623-7623msec 00:15:19.492 WRITE: bw=66.9MiB/s (70.1MB/s), 66.9MiB/s-66.9MiB/s (70.1MB/s-70.1MB/s), io=256MiB (268MB), run=3828-3828msec 00:15:20.058 ----------------------------------------------------- 00:15:20.058 Suppressions used: 00:15:20.058 count bytes template 00:15:20.058 1 5 /usr/src/fio/parse.c 00:15:20.058 2 192 /usr/src/fio/iolog.c 00:15:20.058 1 8 libtcmalloc_minimal.so 00:15:20.058 1 904 libcrypto.so 00:15:20.058 ----------------------------------------------------- 00:15:20.058 00:15:20.058 06:40:30 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:20.058 06:40:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.058 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 06:40:30 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:20.058 Remove shared memory files 00:15:20.058 06:40:30 -- ftl/fio.sh@85 -- # remove_shm 00:15:20.058 06:40:30 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:20.058 06:40:30 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:20.058 06:40:30 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:20.058 06:40:30 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid68241 /dev/shm/spdk_tgt_trace.pid80680 00:15:20.058 06:40:30 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:20.058 06:40:30 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:20.058 ************************************ 00:15:20.058 END TEST ftl_fio_basic 00:15:20.058 ************************************ 00:15:20.058 00:15:20.058 real 0m51.675s 00:15:20.058 user 1m55.496s 00:15:20.058 sys 0m2.292s 00:15:20.058 06:40:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:20.058 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.058 06:40:30 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:20.058 06:40:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:20.058 06:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.058 06:40:30 -- common/autotest_common.sh@10 -- # set +x 
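Each of the runs above was launched through the fio_bdev helper traced in the xtrace output: it resolves the ASAN runtime the fio plugin links against (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the plugin itself, since the sanitizer must be the first DSO loaded. A condensed sketch of that pattern — not the verbatim autotest_common.sh implementation:

    # Minimal reconstruction of the fio_bdev/fio_plugin pattern visible in the trace.
    fio_bdev() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        local asan_lib
        # Same probe as the trace: ldd the plugin and pull the resolved libasan path.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }
    fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
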
00:15:20.058 ************************************ 00:15:20.058 START TEST ftl_bdevperf 00:15:20.058 ************************************ 00:15:20.058 06:40:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:15:20.316 * Looking for test storage... 00:15:20.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.316 06:40:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:20.316 06:40:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:20.316 06:40:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.316 06:40:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.316 06:40:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.316 06:40:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.316 06:40:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.316 06:40:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.316 06:40:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.316 06:40:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.316 06:40:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.316 06:40:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.316 06:40:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.316 06:40:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.316 06:40:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.316 06:40:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.316 06:40:30 -- scripts/common.sh@344 -- # : 1 00:15:20.316 06:40:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.316 06:40:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.316 06:40:30 -- scripts/common.sh@364 -- # decimal 1 00:15:20.316 06:40:30 -- scripts/common.sh@352 -- # local d=1 00:15:20.316 06:40:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.316 06:40:30 -- scripts/common.sh@354 -- # echo 1 00:15:20.316 06:40:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.316 06:40:30 -- scripts/common.sh@365 -- # decimal 2 00:15:20.316 06:40:30 -- scripts/common.sh@352 -- # local d=2 00:15:20.316 06:40:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.316 06:40:30 -- scripts/common.sh@354 -- # echo 2 00:15:20.316 06:40:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.316 06:40:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.316 06:40:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.316 06:40:30 -- scripts/common.sh@367 -- # return 0 00:15:20.316 06:40:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.316 06:40:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.316 --rc genhtml_branch_coverage=1 00:15:20.316 --rc genhtml_function_coverage=1 00:15:20.316 --rc genhtml_legend=1 00:15:20.316 --rc geninfo_all_blocks=1 00:15:20.316 --rc geninfo_unexecuted_blocks=1 00:15:20.316 00:15:20.316 ' 00:15:20.316 06:40:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.316 --rc genhtml_branch_coverage=1 00:15:20.316 --rc genhtml_function_coverage=1 00:15:20.316 --rc genhtml_legend=1 00:15:20.316 --rc geninfo_all_blocks=1 00:15:20.316 --rc geninfo_unexecuted_blocks=1 00:15:20.316 00:15:20.316 ' 00:15:20.316 06:40:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.316 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.316 --rc genhtml_branch_coverage=1 00:15:20.316 --rc genhtml_function_coverage=1 00:15:20.316 --rc genhtml_legend=1 00:15:20.316 --rc geninfo_all_blocks=1 00:15:20.316 --rc geninfo_unexecuted_blocks=1 00:15:20.316 00:15:20.316 ' 00:15:20.316 06:40:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.316 --rc genhtml_branch_coverage=1 00:15:20.316 --rc genhtml_function_coverage=1 00:15:20.316 --rc genhtml_legend=1 00:15:20.316 --rc geninfo_all_blocks=1 00:15:20.316 --rc geninfo_unexecuted_blocks=1 00:15:20.316 00:15:20.316 ' 00:15:20.316 06:40:30 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:20.316 06:40:30 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:20.316 06:40:30 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.316 06:40:30 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.316 06:40:30 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:20.316 06:40:30 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:20.316 06:40:30 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.316 06:40:30 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:20.316 06:40:30 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:20.316 06:40:30 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.316 06:40:30 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.316 06:40:30 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:20.316 06:40:30 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:20.316 06:40:30 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.316 06:40:30 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.316 06:40:30 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:20.316 06:40:30 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:20.316 06:40:30 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.316 06:40:30 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.316 06:40:30 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:20.316 06:40:30 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:20.316 06:40:30 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.316 06:40:30 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.316 06:40:30 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.316 06:40:30 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.316 06:40:30 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:20.316 06:40:30 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:20.316 06:40:30 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.316 06:40:30 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.316 06:40:30 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@13 -- 
# use_append= 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:20.317 06:40:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.317 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=82476 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@22 -- # waitforlisten 82476 00:15:20.317 06:40:30 -- common/autotest_common.sh@829 -- # '[' -z 82476 ']' 00:15:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.317 06:40:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.317 06:40:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.317 06:40:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.317 06:40:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.317 06:40:30 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:20.317 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:15:20.317 [2024-11-28 06:40:31.049956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:20.317 [2024-11-28 06:40:31.050266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82476 ] 00:15:20.575 [2024-11-28 06:40:31.185720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.575 [2024-11-28 06:40:31.216612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.141 06:40:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.141 06:40:31 -- common/autotest_common.sh@862 -- # return 0 00:15:21.141 06:40:31 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:21.141 06:40:31 -- ftl/common.sh@54 -- # local name=nvme0 00:15:21.141 06:40:31 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:21.141 06:40:31 -- ftl/common.sh@56 -- # local size=103424 00:15:21.141 06:40:31 -- ftl/common.sh@59 -- # local base_bdev 00:15:21.141 06:40:31 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:21.399 06:40:32 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:21.399 06:40:32 -- ftl/common.sh@62 -- # local base_size 00:15:21.399 06:40:32 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:21.399 06:40:32 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:21.399 06:40:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:21.399 06:40:32 -- common/autotest_common.sh@1369 -- # local bs 00:15:21.399 06:40:32 -- common/autotest_common.sh@1370 -- # local nb 00:15:21.399 06:40:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:21.658 06:40:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:21.658 { 00:15:21.658 "name": "nvme0n1", 00:15:21.658 "aliases": [ 00:15:21.658 "f3a4f039-d008-42ee-9a4a-88fb7471e193" 
00:15:21.658 ], 00:15:21.658 "product_name": "NVMe disk", 00:15:21.658 "block_size": 4096, 00:15:21.658 "num_blocks": 1310720, 00:15:21.658 "uuid": "f3a4f039-d008-42ee-9a4a-88fb7471e193", 00:15:21.658 "assigned_rate_limits": { 00:15:21.658 "rw_ios_per_sec": 0, 00:15:21.658 "rw_mbytes_per_sec": 0, 00:15:21.658 "r_mbytes_per_sec": 0, 00:15:21.658 "w_mbytes_per_sec": 0 00:15:21.658 }, 00:15:21.658 "claimed": true, 00:15:21.658 "claim_type": "read_many_write_one", 00:15:21.658 "zoned": false, 00:15:21.658 "supported_io_types": { 00:15:21.658 "read": true, 00:15:21.658 "write": true, 00:15:21.658 "unmap": true, 00:15:21.658 "write_zeroes": true, 00:15:21.658 "flush": true, 00:15:21.658 "reset": true, 00:15:21.658 "compare": true, 00:15:21.658 "compare_and_write": false, 00:15:21.658 "abort": true, 00:15:21.658 "nvme_admin": true, 00:15:21.658 "nvme_io": true 00:15:21.658 }, 00:15:21.658 "driver_specific": { 00:15:21.658 "nvme": [ 00:15:21.658 { 00:15:21.658 "pci_address": "0000:00:07.0", 00:15:21.658 "trid": { 00:15:21.658 "trtype": "PCIe", 00:15:21.658 "traddr": "0000:00:07.0" 00:15:21.658 }, 00:15:21.658 "ctrlr_data": { 00:15:21.658 "cntlid": 0, 00:15:21.658 "vendor_id": "0x1b36", 00:15:21.658 "model_number": "QEMU NVMe Ctrl", 00:15:21.658 "serial_number": "12341", 00:15:21.658 "firmware_revision": "8.0.0", 00:15:21.658 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:21.658 "oacs": { 00:15:21.658 "security": 0, 00:15:21.658 "format": 1, 00:15:21.658 "firmware": 0, 00:15:21.658 "ns_manage": 1 00:15:21.658 }, 00:15:21.658 "multi_ctrlr": false, 00:15:21.658 "ana_reporting": false 00:15:21.658 }, 00:15:21.658 "vs": { 00:15:21.658 "nvme_version": "1.4" 00:15:21.658 }, 00:15:21.658 "ns_data": { 00:15:21.658 "id": 1, 00:15:21.658 "can_share": false 00:15:21.658 } 00:15:21.658 } 00:15:21.658 ], 00:15:21.658 "mp_policy": "active_passive" 00:15:21.658 } 00:15:21.658 } 00:15:21.658 ]' 00:15:21.658 06:40:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:21.658 06:40:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:21.658 06:40:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:21.658 06:40:32 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:21.658 06:40:32 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:21.658 06:40:32 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:21.658 06:40:32 -- ftl/common.sh@63 -- # base_size=5120 00:15:21.658 06:40:32 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:21.658 06:40:32 -- ftl/common.sh@67 -- # clear_lvols 00:15:21.658 06:40:32 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:21.658 06:40:32 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:21.917 06:40:32 -- ftl/common.sh@28 -- # stores=b5b214cb-ff99-4c1d-b2c8-40fa20e20809 00:15:21.917 06:40:32 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:21.917 06:40:32 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5b214cb-ff99-4c1d-b2c8-40fa20e20809 00:15:22.176 06:40:32 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:22.176 06:40:32 -- ftl/common.sh@68 -- # lvs=abfb6afc-8af1-4ca4-86f5-7b4b8c28c353 00:15:22.176 06:40:32 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u abfb6afc-8af1-4ca4-86f5-7b4b8c28c353 00:15:22.435 06:40:33 -- ftl/bdevperf.sh@23 -- # split_bdev=b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.435 06:40:33 -- 
ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.435 06:40:33 -- ftl/common.sh@35 -- # local name=nvc0 00:15:22.435 06:40:33 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:22.435 06:40:33 -- ftl/common.sh@37 -- # local base_bdev=b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.435 06:40:33 -- ftl/common.sh@38 -- # local cache_size= 00:15:22.435 06:40:33 -- ftl/common.sh@41 -- # get_bdev_size b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.435 06:40:33 -- common/autotest_common.sh@1367 -- # local bdev_name=b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.435 06:40:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:22.435 06:40:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:22.435 06:40:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:22.435 06:40:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.694 06:40:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:22.694 { 00:15:22.694 "name": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:22.694 "aliases": [ 00:15:22.694 "lvs/nvme0n1p0" 00:15:22.694 ], 00:15:22.694 "product_name": "Logical Volume", 00:15:22.694 "block_size": 4096, 00:15:22.694 "num_blocks": 26476544, 00:15:22.694 "uuid": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:22.694 "assigned_rate_limits": { 00:15:22.694 "rw_ios_per_sec": 0, 00:15:22.694 "rw_mbytes_per_sec": 0, 00:15:22.694 "r_mbytes_per_sec": 0, 00:15:22.694 "w_mbytes_per_sec": 0 00:15:22.694 }, 00:15:22.694 "claimed": false, 00:15:22.694 "zoned": false, 00:15:22.694 "supported_io_types": { 00:15:22.694 "read": true, 00:15:22.694 "write": true, 00:15:22.694 "unmap": true, 00:15:22.694 "write_zeroes": true, 00:15:22.694 "flush": false, 00:15:22.694 "reset": true, 00:15:22.694 "compare": false, 00:15:22.694 "compare_and_write": false, 00:15:22.694 "abort": false, 00:15:22.694 "nvme_admin": false, 00:15:22.694 "nvme_io": false 00:15:22.694 }, 00:15:22.694 "driver_specific": { 00:15:22.694 "lvol": { 00:15:22.694 "lvol_store_uuid": "abfb6afc-8af1-4ca4-86f5-7b4b8c28c353", 00:15:22.694 "base_bdev": "nvme0n1", 00:15:22.694 "thin_provision": true, 00:15:22.694 "snapshot": false, 00:15:22.694 "clone": false, 00:15:22.694 "esnap_clone": false 00:15:22.694 } 00:15:22.694 } 00:15:22.694 } 00:15:22.694 ]' 00:15:22.694 06:40:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:22.694 06:40:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:22.694 06:40:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:22.694 06:40:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:22.694 06:40:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:22.694 06:40:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:22.694 06:40:33 -- ftl/common.sh@41 -- # local base_size=5171 00:15:22.694 06:40:33 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:22.694 06:40:33 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:22.953 06:40:33 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:22.953 06:40:33 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:22.953 06:40:33 -- ftl/common.sh@48 -- # get_bdev_size b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.953 06:40:33 -- common/autotest_common.sh@1367 -- # local bdev_name=b93282ac-d470-4bb8-ab75-88a92d558559 00:15:22.953 06:40:33 -- common/autotest_common.sh@1368 -- # local bdev_info 
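Up to this point bdevperf.sh has built the FTL base device in three RPC steps: attach the NVMe controller, create a logical-volume store on the resulting namespace, and carve out a thin-provisioned 103424 MiB (~101 GiB) volume. Collected into one sequence, with the commands and IDs lifted from the trace itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs   # this run returned abfb6afc-8af1-4ca4-86f5-7b4b8c28c353
    # -t makes the lvol thin-provisioned: 103424 MiB logical on a 5 GiB namespace.
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u abfb6afc-8af1-4ca4-86f5-7b4b8c28c353
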
00:15:22.953 06:40:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:22.953 06:40:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:22.953 06:40:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b93282ac-d470-4bb8-ab75-88a92d558559 00:15:23.212 06:40:33 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:23.212 { 00:15:23.212 "name": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:23.212 "aliases": [ 00:15:23.212 "lvs/nvme0n1p0" 00:15:23.212 ], 00:15:23.212 "product_name": "Logical Volume", 00:15:23.212 "block_size": 4096, 00:15:23.212 "num_blocks": 26476544, 00:15:23.212 "uuid": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:23.212 "assigned_rate_limits": { 00:15:23.212 "rw_ios_per_sec": 0, 00:15:23.212 "rw_mbytes_per_sec": 0, 00:15:23.212 "r_mbytes_per_sec": 0, 00:15:23.212 "w_mbytes_per_sec": 0 00:15:23.212 }, 00:15:23.212 "claimed": false, 00:15:23.212 "zoned": false, 00:15:23.212 "supported_io_types": { 00:15:23.212 "read": true, 00:15:23.212 "write": true, 00:15:23.212 "unmap": true, 00:15:23.212 "write_zeroes": true, 00:15:23.212 "flush": false, 00:15:23.212 "reset": true, 00:15:23.212 "compare": false, 00:15:23.212 "compare_and_write": false, 00:15:23.212 "abort": false, 00:15:23.212 "nvme_admin": false, 00:15:23.212 "nvme_io": false 00:15:23.212 }, 00:15:23.212 "driver_specific": { 00:15:23.212 "lvol": { 00:15:23.212 "lvol_store_uuid": "abfb6afc-8af1-4ca4-86f5-7b4b8c28c353", 00:15:23.212 "base_bdev": "nvme0n1", 00:15:23.212 "thin_provision": true, 00:15:23.212 "snapshot": false, 00:15:23.212 "clone": false, 00:15:23.212 "esnap_clone": false 00:15:23.212 } 00:15:23.212 } 00:15:23.212 } 00:15:23.212 ]' 00:15:23.212 06:40:33 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:23.212 06:40:33 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:23.212 06:40:33 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:23.212 06:40:33 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:23.212 06:40:33 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:23.212 06:40:33 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:23.212 06:40:33 -- ftl/common.sh@48 -- # cache_size=5171 00:15:23.212 06:40:33 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:23.471 06:40:33 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:15:23.471 06:40:33 -- ftl/bdevperf.sh@26 -- # get_bdev_size b93282ac-d470-4bb8-ab75-88a92d558559 00:15:23.471 06:40:33 -- common/autotest_common.sh@1367 -- # local bdev_name=b93282ac-d470-4bb8-ab75-88a92d558559 00:15:23.471 06:40:33 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:23.471 06:40:33 -- common/autotest_common.sh@1369 -- # local bs 00:15:23.471 06:40:33 -- common/autotest_common.sh@1370 -- # local nb 00:15:23.471 06:40:33 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b93282ac-d470-4bb8-ab75-88a92d558559 00:15:23.471 06:40:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:23.471 { 00:15:23.471 "name": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:23.471 "aliases": [ 00:15:23.471 "lvs/nvme0n1p0" 00:15:23.471 ], 00:15:23.471 "product_name": "Logical Volume", 00:15:23.471 "block_size": 4096, 00:15:23.471 "num_blocks": 26476544, 00:15:23.471 "uuid": "b93282ac-d470-4bb8-ab75-88a92d558559", 00:15:23.471 "assigned_rate_limits": { 00:15:23.471 "rw_ios_per_sec": 0, 00:15:23.471 "rw_mbytes_per_sec": 0, 00:15:23.471 "r_mbytes_per_sec": 
0, 00:15:23.471 "w_mbytes_per_sec": 0 00:15:23.471 }, 00:15:23.471 "claimed": false, 00:15:23.471 "zoned": false, 00:15:23.471 "supported_io_types": { 00:15:23.471 "read": true, 00:15:23.471 "write": true, 00:15:23.471 "unmap": true, 00:15:23.471 "write_zeroes": true, 00:15:23.471 "flush": false, 00:15:23.471 "reset": true, 00:15:23.471 "compare": false, 00:15:23.471 "compare_and_write": false, 00:15:23.471 "abort": false, 00:15:23.471 "nvme_admin": false, 00:15:23.471 "nvme_io": false 00:15:23.471 }, 00:15:23.471 "driver_specific": { 00:15:23.471 "lvol": { 00:15:23.471 "lvol_store_uuid": "abfb6afc-8af1-4ca4-86f5-7b4b8c28c353", 00:15:23.471 "base_bdev": "nvme0n1", 00:15:23.471 "thin_provision": true, 00:15:23.471 "snapshot": false, 00:15:23.471 "clone": false, 00:15:23.471 "esnap_clone": false 00:15:23.471 } 00:15:23.471 } 00:15:23.471 } 00:15:23.471 ]' 00:15:23.471 06:40:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:23.471 06:40:34 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:23.471 06:40:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:23.731 06:40:34 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:23.731 06:40:34 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:23.731 06:40:34 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:23.731 06:40:34 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:15:23.731 06:40:34 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b93282ac-d470-4bb8-ab75-88a92d558559 -c nvc0n1p0 --l2p_dram_limit 20 00:15:23.731 [2024-11-28 06:40:34.414346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.414389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:23.731 [2024-11-28 06:40:34.414403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:23.731 [2024-11-28 06:40:34.414412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.414451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.414459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:23.731 [2024-11-28 06:40:34.414468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:23.731 [2024-11-28 06:40:34.414474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.414488] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:23.731 [2024-11-28 06:40:34.414729] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:23.731 [2024-11-28 06:40:34.414744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.414749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:23.731 [2024-11-28 06:40:34.414757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:15:23.731 [2024-11-28 06:40:34.414765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.414813] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f4a5499a-841c-4df2-a297-911b78687b49 00:15:23.731 [2024-11-28 06:40:34.415779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.415801] 
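[Editor's note] For readers following the trace: the two bdev_get_bdevs dumps above feed the harness's get_bdev_size() helper, which multiplies block_size by num_blocks and converts to MiB; the 5171 MiB cache_size handed to bdev_split_create a few lines up is set directly at ftl/common.sh line 48 rather than derived. A minimal standalone sketch of the size arithmetic, with the rpc.py path and jq filters taken from the trace (bdev_name stands for the lvol UUID queried above):

    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 in this run
    echo $(( bs * nb / 1024 / 1024 ))             # 4096 * 26476544 / 2^20 = 103424 (MiB)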
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:23.731 [2024-11-28 06:40:34.415809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:23.731 [2024-11-28 06:40:34.415817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.420484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.420514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:23.731 [2024-11-28 06:40:34.420522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.639 ms 00:15:23.731 [2024-11-28 06:40:34.420531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.420594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.420603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:23.731 [2024-11-28 06:40:34.420609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:23.731 [2024-11-28 06:40:34.420616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.731 [2024-11-28 06:40:34.420649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.731 [2024-11-28 06:40:34.420657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:23.731 [2024-11-28 06:40:34.420663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:23.731 [2024-11-28 06:40:34.420670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.732 [2024-11-28 06:40:34.420692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:23.732 [2024-11-28 06:40:34.421966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.732 [2024-11-28 06:40:34.421989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:23.732 [2024-11-28 06:40:34.421998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:15:23.732 [2024-11-28 06:40:34.422005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.732 [2024-11-28 06:40:34.422031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.732 [2024-11-28 06:40:34.422041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:23.732 [2024-11-28 06:40:34.422049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:23.732 [2024-11-28 06:40:34.422055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.732 [2024-11-28 06:40:34.422073] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:23.732 [2024-11-28 06:40:34.422159] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:23.732 [2024-11-28 06:40:34.422169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:23.732 [2024-11-28 06:40:34.422177] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:23.732 [2024-11-28 06:40:34.422189] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422195] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache 
device capacity: 5171.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422206] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:23.732 [2024-11-28 06:40:34.422211] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:23.732 [2024-11-28 06:40:34.422220] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:23.732 [2024-11-28 06:40:34.422228] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:23.732 [2024-11-28 06:40:34.422234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.732 [2024-11-28 06:40:34.422240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:23.732 [2024-11-28 06:40:34.422251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:15:23.732 [2024-11-28 06:40:34.422256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.732 [2024-11-28 06:40:34.422305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.732 [2024-11-28 06:40:34.422311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:23.732 [2024-11-28 06:40:34.422318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:23.732 [2024-11-28 06:40:34.422323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.732 [2024-11-28 06:40:34.422378] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:23.732 [2024-11-28 06:40:34.422384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:23.732 [2024-11-28 06:40:34.422392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:23.732 [2024-11-28 06:40:34.422453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:23.732 [2024-11-28 06:40:34.422471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:23.732 [2024-11-28 06:40:34.422482] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:23.732 [2024-11-28 06:40:34.422487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:23.732 [2024-11-28 06:40:34.422495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:23.732 [2024-11-28 06:40:34.422502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:23.732 [2024-11-28 06:40:34.422508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:23.732 [2024-11-28 06:40:34.422513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:23.732 [2024-11-28 06:40:34.422525] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:23.732 [2024-11-28 06:40:34.422532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:23.732 
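[Editor's note] Pausing mid-dump: the entries printed so far are internally consistent. 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB "Region l2p" above, and the --l2p_dram_limit 20 passed to bdev_ftl_create is why a resident-size report further down the trace caps the in-DRAM L2P at 19 of 20 MiB. A sketch of the cross-check (the second line assumes, as the 4 KiB block size suggests, one entry per logical block):

    echo $(( 20971520 * 4 / 1024 / 1024 ))      # = 80    -> "Region l2p ... blocks: 80.00 MiB"
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # = 81920 MiB of logical space those entries can map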
[2024-11-28 06:40:34.422537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:23.732 [2024-11-28 06:40:34.422544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:23.732 [2024-11-28 06:40:34.422549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:23.732 [2024-11-28 06:40:34.422561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422571] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:23.732 [2024-11-28 06:40:34.422577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:23.732 [2024-11-28 06:40:34.422596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:23.732 [2024-11-28 06:40:34.422612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:23.732 [2024-11-28 06:40:34.422628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:23.732 [2024-11-28 06:40:34.422640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:23.732 [2024-11-28 06:40:34.422647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:23.732 [2024-11-28 06:40:34.422652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:23.732 [2024-11-28 06:40:34.422658] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:23.732 [2024-11-28 06:40:34.422664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:23.732 [2024-11-28 06:40:34.422670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:23.732 [2024-11-28 06:40:34.422684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:23.732 [2024-11-28 06:40:34.422690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:23.732 [2024-11-28 06:40:34.422696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:23.732 [2024-11-28 06:40:34.422716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:23.732 [2024-11-28 06:40:34.422723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:23.732 [2024-11-28 06:40:34.422728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:23.732 [2024-11-28 06:40:34.422735] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:23.732 [2024-11-28 06:40:34.422742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:23.732 [2024-11-28 06:40:34.422754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:23.732 [2024-11-28 06:40:34.422760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:23.732 [2024-11-28 06:40:34.422766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:23.732 [2024-11-28 06:40:34.422772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:23.732 [2024-11-28 06:40:34.422778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:23.732 [2024-11-28 06:40:34.422784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:23.732 [2024-11-28 06:40:34.422790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:23.732 [2024-11-28 06:40:34.422796] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:23.732 [2024-11-28 06:40:34.422803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:23.732 [2024-11-28 06:40:34.422813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:23.732 [2024-11-28 06:40:34.422820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:23.732 [2024-11-28 06:40:34.422825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:23.732 [2024-11-28 06:40:34.422832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:23.732 [2024-11-28 06:40:34.422837] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:23.732 [2024-11-28 06:40:34.422844] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:23.732 [2024-11-28 06:40:34.422850] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:23.732 [2024-11-28 06:40:34.422858] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:23.732 [2024-11-28 06:40:34.422863] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:23.732 [2024-11-28 06:40:34.422870] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:15:23.732 [2024-11-28 06:40:34.422875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.422885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:23.733 [2024-11-28 06:40:34.422891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:15:23.733 [2024-11-28 06:40:34.422898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.428318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.428420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:23.733 [2024-11-28 06:40:34.428469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:15:23.733 [2024-11-28 06:40:34.428489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.428565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.428619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:23.733 [2024-11-28 06:40:34.428734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:15:23.733 [2024-11-28 06:40:34.428755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.442963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.443081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:23.733 [2024-11-28 06:40:34.443126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:15:23.733 [2024-11-28 06:40:34.443146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.443182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.443204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:23.733 [2024-11-28 06:40:34.443219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:23.733 [2024-11-28 06:40:34.443238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.443551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.443596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:23.733 [2024-11-28 06:40:34.443613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:15:23.733 [2024-11-28 06:40:34.443629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.443732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.443752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:23.733 [2024-11-28 06:40:34.443774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:15:23.733 [2024-11-28 06:40:34.443790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.449226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.449351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:23.733 [2024-11-28 06:40:34.449417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:15:23.733 [2024-11-28 
06:40:34.449491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.733 [2024-11-28 06:40:34.458599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:23.733 [2024-11-28 06:40:34.462888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.733 [2024-11-28 06:40:34.462975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:23.733 [2024-11-28 06:40:34.463018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.310 ms 00:15:23.733 [2024-11-28 06:40:34.463036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.992 [2024-11-28 06:40:34.520783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.992 [2024-11-28 06:40:34.520953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:23.992 [2024-11-28 06:40:34.521021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.710 ms 00:15:23.992 [2024-11-28 06:40:34.521051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.992 [2024-11-28 06:40:34.521185] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:23.992 [2024-11-28 06:40:34.521251] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:26.525 [2024-11-28 06:40:36.760066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.760288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:26.525 [2024-11-28 06:40:36.760382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2238.865 ms 00:15:26.525 [2024-11-28 06:40:36.760407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.760603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.760680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:26.525 [2024-11-28 06:40:36.760721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:15:26.525 [2024-11-28 06:40:36.760740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.763991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.764027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:26.525 [2024-11-28 06:40:36.764042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:15:26.525 [2024-11-28 06:40:36.764050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.766377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.766406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:26.525 [2024-11-28 06:40:36.766418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:15:26.525 [2024-11-28 06:40:36.766426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.766583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.766594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:26.525 [2024-11-28 06:40:36.766608] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:26.525 [2024-11-28 06:40:36.766619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.787642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.787683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:26.525 [2024-11-28 06:40:36.787696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.991 ms 00:15:26.525 [2024-11-28 06:40:36.787721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.791551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.791682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:26.525 [2024-11-28 06:40:36.791713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.791 ms 00:15:26.525 [2024-11-28 06:40:36.791724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.792931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.792957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:26.525 [2024-11-28 06:40:36.792967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:15:26.525 [2024-11-28 06:40:36.792975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.795956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.795988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:26.525 [2024-11-28 06:40:36.796000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:15:26.525 [2024-11-28 06:40:36.796008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.796044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.796055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:26.525 [2024-11-28 06:40:36.796066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:26.525 [2024-11-28 06:40:36.796074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.796136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.525 [2024-11-28 06:40:36.796150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:26.525 [2024-11-28 06:40:36.796163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:26.525 [2024-11-28 06:40:36.796172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.525 [2024-11-28 06:40:36.797017] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2382.293 ms, result 0 00:15:26.525 { 00:15:26.525 "name": "ftl0", 00:15:26.525 "uuid": "f4a5499a-841c-4df2-a297-911b78687b49" 00:15:26.525 } 00:15:26.525 06:40:36 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:26.525 06:40:36 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:15:26.525 06:40:36 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:15:26.525 06:40:37 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 
69632 00:15:26.525 [2024-11-28 06:40:37.086309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:26.525 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:26.525 Zero copy mechanism will not be used. 00:15:26.525 Running I/O for 4 seconds... 00:15:30.714 00:15:30.715 Latency(us) 00:15:30.715 [2024-11-28T06:40:41.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.715 [2024-11-28T06:40:41.485Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:30.715 ftl0 : 4.00 3029.30 201.16 0.00 0.00 347.47 172.50 1083.86 00:15:30.715 [2024-11-28T06:40:41.485Z] =================================================================================================================== 00:15:30.715 [2024-11-28T06:40:41.485Z] Total : 3029.30 201.16 0.00 0.00 347.47 172.50 1083.86 00:15:30.715 0 00:15:30.715 [2024-11-28 06:40:41.092825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:30.715 06:40:41 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:30.715 [2024-11-28 06:40:41.201465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:30.715 Running I/O for 4 seconds... 00:15:34.900 00:15:34.900 Latency(us) 00:15:34.900 [2024-11-28T06:40:45.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.900 [2024-11-28T06:40:45.670Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.900 ftl0 : 4.01 7424.78 29.00 0.00 0.00 17203.87 253.64 42346.34 00:15:34.900 [2024-11-28T06:40:45.670Z] =================================================================================================================== 00:15:34.900 [2024-11-28T06:40:45.670Z] Total : 7424.78 29.00 0.00 0.00 17203.87 0.00 42346.34 00:15:34.900 [2024-11-28 06:40:45.222774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:34.900 0 00:15:34.900 06:40:45 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:34.900 [2024-11-28 06:40:45.326375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:34.900 Running I/O for 4 seconds... 
00:15:39.089 00:15:39.089 Latency(us) 00:15:39.089 [2024-11-28T06:40:49.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.089 [2024-11-28T06:40:49.859Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.089 Verification LBA range: start 0x0 length 0x1400000 00:15:39.089 ftl0 : 4.01 11283.51 44.08 0.00 0.00 11318.16 149.66 22685.54 00:15:39.089 [2024-11-28T06:40:49.859Z] =================================================================================================================== 00:15:39.089 [2024-11-28T06:40:49.859Z] Total : 11283.51 44.08 0.00 0.00 11318.16 0.00 22685.54 00:15:39.089 [2024-11-28 06:40:49.338294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:39.089 0 00:15:39.089 06:40:49 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:39.089 [2024-11-28 06:40:49.530599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.530638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:39.089 [2024-11-28 06:40:49.530651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:39.089 [2024-11-28 06:40:49.530660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.530686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:39.089 [2024-11-28 06:40:49.531119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.531146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:39.089 [2024-11-28 06:40:49.531155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:15:39.089 [2024-11-28 06:40:49.531164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.533450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.533488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:39.089 [2024-11-28 06:40:49.533502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:15:39.089 [2024-11-28 06:40:49.533511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.716366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.716523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:39.089 [2024-11-28 06:40:49.716543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 182.839 ms 00:15:39.089 [2024-11-28 06:40:49.716553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.722632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.722662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:39.089 [2024-11-28 06:40:49.722672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms 00:15:39.089 [2024-11-28 06:40:49.722685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.724809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.724844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
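[Editor's note] The same cross-check holds for the verify pass above (11283.51 IOPS x 4096 B = 44.08 MiB/s), and its Fail/s column of 0.00 across a 0x1400000-block LBA range is the actual point of the run. The teardown that follows is driven by a single RPC, as issued in the trace; everything traced after it is the 'FTL shutdown' management pipeline:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0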
00:15:39.089 [2024-11-28 06:40:49.724853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:15:39.089 [2024-11-28 06:40:49.724861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.729638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.729672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:39.089 [2024-11-28 06:40:49.729681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:15:39.089 [2024-11-28 06:40:49.729697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.729819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.729848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:39.089 [2024-11-28 06:40:49.729857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:15:39.089 [2024-11-28 06:40:49.729865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.732271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.732304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:39.089 [2024-11-28 06:40:49.732313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:15:39.089 [2024-11-28 06:40:49.732324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.734521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.734552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:39.089 [2024-11-28 06:40:49.734561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:15:39.089 [2024-11-28 06:40:49.734569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.736490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.736522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:39.089 [2024-11-28 06:40:49.736530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:15:39.089 [2024-11-28 06:40:49.736538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.738273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.089 [2024-11-28 06:40:49.738305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:39.089 [2024-11-28 06:40:49.738313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:15:39.089 [2024-11-28 06:40:49.738321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.089 [2024-11-28 06:40:49.738346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:39.089 [2024-11-28 06:40:49.738367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738396] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:39.089 [2024-11-28 06:40:49.738503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 
06:40:49.738603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:39.090 [2024-11-28 06:40:49.738833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.738999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:39.090 [2024-11-28 06:40:49.739231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:39.090 [2024-11-28 06:40:49.739239] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f4a5499a-841c-4df2-a297-911b78687b49 00:15:39.090 [2024-11-28 06:40:49.739248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:39.090 
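[Editor's note] The band report above is uniform: 100 bands of 261120 blocks each, every one free with wr_cnt 0 after the clean shutdown. Those figures reconcile with the earlier layout dump:

    echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))   # = 102000 MiB of band payload
    # The data_btm region is 102400.00 MiB; reading the ~400 MiB difference as roughly
    # 1024 blocks of per-band metadata is an inference from the region sizes, not
    # something the log states directly.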
[2024-11-28 06:40:49.739255] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:39.090 [2024-11-28 06:40:49.739263] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:39.090 [2024-11-28 06:40:49.739271] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:39.090 [2024-11-28 06:40:49.739280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:39.090 [2024-11-28 06:40:49.739288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:39.091 [2024-11-28 06:40:49.739296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:39.091 [2024-11-28 06:40:49.739302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:39.091 [2024-11-28 06:40:49.739310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:39.091 [2024-11-28 06:40:49.739317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.091 [2024-11-28 06:40:49.739325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:39.091 [2024-11-28 06:40:49.739335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:15:39.091 [2024-11-28 06:40:49.739345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.740769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.091 [2024-11-28 06:40:49.740789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:39.091 [2024-11-28 06:40:49.740797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:15:39.091 [2024-11-28 06:40:49.740806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.740870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.091 [2024-11-28 06:40:49.740882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:39.091 [2024-11-28 06:40:49.740890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:39.091 [2024-11-28 06:40:49.740898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.745982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.746015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:39.091 [2024-11-28 06:40:49.746024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.746033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.746081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.746096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:39.091 [2024-11-28 06:40:49.746104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.746114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.746168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.746179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:39.091 [2024-11-28 06:40:49.746187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.746195] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.746209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.746218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:39.091 [2024-11-28 06:40:49.746233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.746241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.754325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.754366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:39.091 [2024-11-28 06:40:49.754376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.754384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.757969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:39.091 [2024-11-28 06:40:49.758016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:39.091 [2024-11-28 06:40:49.758082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:39.091 [2024-11-28 06:40:49.758147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:39.091 [2024-11-28 06:40:49.758241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:39.091 [2024-11-28 06:40:49.758294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:39.091 [2024-11-28 06:40:49.758358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:39.091 [2024-11-28 06:40:49.758417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:39.091 [2024-11-28 06:40:49.758425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:39.091 [2024-11-28 06:40:49.758435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.091 [2024-11-28 06:40:49.758552] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 227.926 ms, result 0 00:15:39.091 true 00:15:39.091 06:40:49 -- ftl/bdevperf.sh@37 -- # killprocess 82476 00:15:39.091 06:40:49 -- common/autotest_common.sh@936 -- # '[' -z 82476 ']' 00:15:39.091 06:40:49 -- common/autotest_common.sh@940 -- # kill -0 82476 00:15:39.091 06:40:49 -- common/autotest_common.sh@941 -- # uname 00:15:39.091 06:40:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.091 06:40:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82476 00:15:39.091 killing process with pid 82476 00:15:39.091 Received shutdown signal, test time was about 4.000000 seconds 00:15:39.091 00:15:39.091 Latency(us) 00:15:39.091 [2024-11-28T06:40:49.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.091 [2024-11-28T06:40:49.861Z] =================================================================================================================== 00:15:39.091 [2024-11-28T06:40:49.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.091 06:40:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.091 06:40:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.091 06:40:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82476' 00:15:39.091 06:40:49 -- common/autotest_common.sh@955 -- # kill 82476 00:15:39.091 06:40:49 -- common/autotest_common.sh@960 -- # wait 82476 00:15:44.363 06:40:54 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:15:44.363 06:40:54 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:44.363 06:40:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.363 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 Remove shared memory files 00:15:44.363 06:40:54 -- ftl/bdevperf.sh@41 -- # remove_shm 00:15:44.363 06:40:54 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:44.363 06:40:54 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:44.363 06:40:54 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:44.363 06:40:54 -- ftl/common.sh@207 -- # rm -f rm -f 00:15:44.363 06:40:54 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:44.363 06:40:54 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:44.363 ************************************ 00:15:44.363 END TEST ftl_bdevperf 00:15:44.363 ************************************ 00:15:44.363 00:15:44.363 real 0m23.631s 00:15:44.363 user 0m26.017s 00:15:44.363 sys 0m0.808s 00:15:44.363 06:40:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:44.363 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 06:40:54 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:44.363 06:40:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
00:15:44.363 06:40:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.363 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 ************************************ 00:15:44.363 START TEST ftl_trim 00:15:44.363 ************************************ 00:15:44.363 06:40:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:44.363 * Looking for test storage... 00:15:44.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.363 06:40:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:44.363 06:40:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:44.363 06:40:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:44.363 06:40:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:44.363 06:40:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:44.363 06:40:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:44.363 06:40:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:44.363 06:40:54 -- scripts/common.sh@335 -- # IFS=.-: 00:15:44.363 06:40:54 -- scripts/common.sh@335 -- # read -ra ver1 00:15:44.363 06:40:54 -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.363 06:40:54 -- scripts/common.sh@336 -- # read -ra ver2 00:15:44.363 06:40:54 -- scripts/common.sh@337 -- # local 'op=<' 00:15:44.363 06:40:54 -- scripts/common.sh@339 -- # ver1_l=2 00:15:44.363 06:40:54 -- scripts/common.sh@340 -- # ver2_l=1 00:15:44.363 06:40:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:44.363 06:40:54 -- scripts/common.sh@343 -- # case "$op" in 00:15:44.363 06:40:54 -- scripts/common.sh@344 -- # : 1 00:15:44.363 06:40:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:44.363 06:40:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.363 06:40:54 -- scripts/common.sh@364 -- # decimal 1 00:15:44.363 06:40:54 -- scripts/common.sh@352 -- # local d=1 00:15:44.363 06:40:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.363 06:40:54 -- scripts/common.sh@354 -- # echo 1 00:15:44.363 06:40:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:44.363 06:40:54 -- scripts/common.sh@365 -- # decimal 2 00:15:44.363 06:40:54 -- scripts/common.sh@352 -- # local d=2 00:15:44.363 06:40:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.363 06:40:54 -- scripts/common.sh@354 -- # echo 2 00:15:44.363 06:40:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:44.363 06:40:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:44.363 06:40:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:44.363 06:40:54 -- scripts/common.sh@367 -- # return 0 00:15:44.363 06:40:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.363 06:40:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:44.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.363 --rc genhtml_branch_coverage=1 00:15:44.363 --rc genhtml_function_coverage=1 00:15:44.363 --rc genhtml_legend=1 00:15:44.363 --rc geninfo_all_blocks=1 00:15:44.363 --rc geninfo_unexecuted_blocks=1 00:15:44.363 00:15:44.363 ' 00:15:44.363 06:40:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:44.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.363 --rc genhtml_branch_coverage=1 00:15:44.363 --rc genhtml_function_coverage=1 00:15:44.363 --rc genhtml_legend=1 00:15:44.363 --rc geninfo_all_blocks=1 00:15:44.363 --rc geninfo_unexecuted_blocks=1 00:15:44.363 00:15:44.363 ' 00:15:44.363 06:40:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:44.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.363 --rc genhtml_branch_coverage=1 00:15:44.363 --rc genhtml_function_coverage=1 00:15:44.363 --rc genhtml_legend=1 00:15:44.363 --rc geninfo_all_blocks=1 00:15:44.363 --rc geninfo_unexecuted_blocks=1 00:15:44.363 00:15:44.363 ' 00:15:44.363 06:40:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:44.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.363 --rc genhtml_branch_coverage=1 00:15:44.363 --rc genhtml_function_coverage=1 00:15:44.363 --rc genhtml_legend=1 00:15:44.363 --rc geninfo_all_blocks=1 00:15:44.363 --rc geninfo_unexecuted_blocks=1 00:15:44.363 00:15:44.363 ' 00:15:44.363 06:40:54 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:44.363 06:40:54 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:44.363 06:40:54 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.363 06:40:54 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:44.363 06:40:54 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
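The xtrace block above walks lcov version detection through cmp_versions: both version strings are split on '.', '-' and ':' and their fields compared numerically left to right. A simplified reconstruction of that comparison (function names follow the trace; the body is a sketch that assumes purely numeric fields, unlike the real helper's decimal guard):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                      # split fields exactly as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}            # missing fields count as 0
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                 # all fields equal: only <=, >= and == succeed
    }

    lt 1.15 2 && echo "old lcov"           # 1 < 2, so the branch-coverage LCOV_OPTS get set

This matches the run recorded above, where lt 1.15 2 succeeds and the --rc lcov_branch_coverage flags are exported.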
00:15:44.363 06:40:54 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:44.363 06:40:54 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.363 06:40:54 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:44.363 06:40:54 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:44.363 06:40:54 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.363 06:40:54 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.363 06:40:54 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:44.363 06:40:54 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:44.363 06:40:54 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:44.363 06:40:54 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:44.363 06:40:54 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:44.363 06:40:54 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:44.363 06:40:54 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.363 06:40:54 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.363 06:40:54 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:44.363 06:40:54 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:44.363 06:40:54 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:44.363 06:40:54 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:44.363 06:40:54 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:44.363 06:40:54 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:44.363 06:40:54 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:44.363 06:40:54 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:44.363 06:40:54 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.364 06:40:54 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:44.364 06:40:54 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.364 06:40:54 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:44.364 06:40:54 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:44.364 06:40:54 -- ftl/trim.sh@25 -- # timeout=240 00:15:44.364 06:40:54 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:44.364 06:40:54 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:44.364 06:40:54 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:44.364 06:40:54 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:44.364 06:40:54 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:44.364 06:40:54 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:44.364 06:40:54 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:44.364 06:40:54 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:44.364 06:40:54 -- ftl/trim.sh@40 -- # svcpid=82821 00:15:44.364 06:40:54 -- ftl/trim.sh@41 -- # waitforlisten 82821 00:15:44.364 06:40:54 -- common/autotest_common.sh@829 -- # '[' -z 82821 ']' 00:15:44.364 06:40:54 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:44.364 06:40:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.364 
06:40:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.364 06:40:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.364 06:40:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.364 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:15:44.364 [2024-11-28 06:40:54.714382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:44.364 [2024-11-28 06:40:54.714653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82821 ] 00:15:44.364 [2024-11-28 06:40:54.849780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.364 [2024-11-28 06:40:54.880401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.364 [2024-11-28 06:40:54.880853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.364 [2024-11-28 06:40:54.880905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.364 [2024-11-28 06:40:54.880951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.931 06:40:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.931 06:40:55 -- common/autotest_common.sh@862 -- # return 0 00:15:44.931 06:40:55 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:44.931 06:40:55 -- ftl/common.sh@54 -- # local name=nvme0 00:15:44.931 06:40:55 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:44.931 06:40:55 -- ftl/common.sh@56 -- # local size=103424 00:15:44.931 06:40:55 -- ftl/common.sh@59 -- # local base_bdev 00:15:44.931 06:40:55 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:45.190 06:40:55 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:45.190 06:40:55 -- ftl/common.sh@62 -- # local base_size 00:15:45.190 06:40:55 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:45.190 06:40:55 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:45.190 06:40:55 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:45.190 06:40:55 -- common/autotest_common.sh@1369 -- # local bs 00:15:45.190 06:40:55 -- common/autotest_common.sh@1370 -- # local nb 00:15:45.190 06:40:55 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:45.449 06:40:55 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:45.449 { 00:15:45.449 "name": "nvme0n1", 00:15:45.449 "aliases": [ 00:15:45.449 "2bfe44a7-27ec-40b1-a026-f0b190acaa11" 00:15:45.449 ], 00:15:45.449 "product_name": "NVMe disk", 00:15:45.449 "block_size": 4096, 00:15:45.449 "num_blocks": 1310720, 00:15:45.449 "uuid": "2bfe44a7-27ec-40b1-a026-f0b190acaa11", 00:15:45.449 "assigned_rate_limits": { 00:15:45.449 "rw_ios_per_sec": 0, 00:15:45.449 "rw_mbytes_per_sec": 0, 00:15:45.449 "r_mbytes_per_sec": 0, 00:15:45.449 "w_mbytes_per_sec": 0 00:15:45.449 }, 00:15:45.449 "claimed": true, 00:15:45.449 "claim_type": "read_many_write_one", 00:15:45.449 "zoned": false, 00:15:45.449 "supported_io_types": { 00:15:45.449 "read": true, 00:15:45.449 "write": true, 00:15:45.449 "unmap": true, 00:15:45.449 
"write_zeroes": true, 00:15:45.449 "flush": true, 00:15:45.449 "reset": true, 00:15:45.449 "compare": true, 00:15:45.449 "compare_and_write": false, 00:15:45.449 "abort": true, 00:15:45.449 "nvme_admin": true, 00:15:45.449 "nvme_io": true 00:15:45.449 }, 00:15:45.449 "driver_specific": { 00:15:45.449 "nvme": [ 00:15:45.449 { 00:15:45.449 "pci_address": "0000:00:07.0", 00:15:45.449 "trid": { 00:15:45.449 "trtype": "PCIe", 00:15:45.449 "traddr": "0000:00:07.0" 00:15:45.449 }, 00:15:45.449 "ctrlr_data": { 00:15:45.449 "cntlid": 0, 00:15:45.449 "vendor_id": "0x1b36", 00:15:45.449 "model_number": "QEMU NVMe Ctrl", 00:15:45.449 "serial_number": "12341", 00:15:45.449 "firmware_revision": "8.0.0", 00:15:45.449 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:45.449 "oacs": { 00:15:45.449 "security": 0, 00:15:45.449 "format": 1, 00:15:45.449 "firmware": 0, 00:15:45.449 "ns_manage": 1 00:15:45.449 }, 00:15:45.449 "multi_ctrlr": false, 00:15:45.449 "ana_reporting": false 00:15:45.449 }, 00:15:45.449 "vs": { 00:15:45.449 "nvme_version": "1.4" 00:15:45.449 }, 00:15:45.449 "ns_data": { 00:15:45.449 "id": 1, 00:15:45.449 "can_share": false 00:15:45.449 } 00:15:45.449 } 00:15:45.449 ], 00:15:45.449 "mp_policy": "active_passive" 00:15:45.449 } 00:15:45.449 } 00:15:45.449 ]' 00:15:45.449 06:40:55 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:45.449 06:40:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:45.449 06:40:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:45.449 06:40:56 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:45.449 06:40:56 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:45.449 06:40:56 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:45.449 06:40:56 -- ftl/common.sh@63 -- # base_size=5120 00:15:45.449 06:40:56 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:45.449 06:40:56 -- ftl/common.sh@67 -- # clear_lvols 00:15:45.449 06:40:56 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:45.449 06:40:56 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:45.708 06:40:56 -- ftl/common.sh@28 -- # stores=abfb6afc-8af1-4ca4-86f5-7b4b8c28c353 00:15:45.708 06:40:56 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:45.708 06:40:56 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u abfb6afc-8af1-4ca4-86f5-7b4b8c28c353 00:15:45.708 06:40:56 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:45.967 06:40:56 -- ftl/common.sh@68 -- # lvs=0e8f1e28-e12c-49a9-8346-a3f584cf9225 00:15:45.967 06:40:56 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0e8f1e28-e12c-49a9-8346-a3f584cf9225 00:15:46.225 06:40:56 -- ftl/trim.sh@43 -- # split_bdev=67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.225 06:40:56 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.225 06:40:56 -- ftl/common.sh@35 -- # local name=nvc0 00:15:46.225 06:40:56 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:46.225 06:40:56 -- ftl/common.sh@37 -- # local base_bdev=67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.225 06:40:56 -- ftl/common.sh@38 -- # local cache_size= 00:15:46.225 06:40:56 -- ftl/common.sh@41 -- # get_bdev_size 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.225 06:40:56 -- common/autotest_common.sh@1367 -- # local bdev_name=67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.225 06:40:56 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:46.225 06:40:56 -- common/autotest_common.sh@1369 -- # local bs 00:15:46.225 06:40:56 -- common/autotest_common.sh@1370 -- # local nb 00:15:46.225 06:40:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.484 06:40:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:46.484 { 00:15:46.484 "name": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:46.484 "aliases": [ 00:15:46.484 "lvs/nvme0n1p0" 00:15:46.484 ], 00:15:46.484 "product_name": "Logical Volume", 00:15:46.484 "block_size": 4096, 00:15:46.484 "num_blocks": 26476544, 00:15:46.484 "uuid": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:46.484 "assigned_rate_limits": { 00:15:46.484 "rw_ios_per_sec": 0, 00:15:46.484 "rw_mbytes_per_sec": 0, 00:15:46.484 "r_mbytes_per_sec": 0, 00:15:46.484 "w_mbytes_per_sec": 0 00:15:46.484 }, 00:15:46.484 "claimed": false, 00:15:46.484 "zoned": false, 00:15:46.484 "supported_io_types": { 00:15:46.484 "read": true, 00:15:46.484 "write": true, 00:15:46.484 "unmap": true, 00:15:46.484 "write_zeroes": true, 00:15:46.484 "flush": false, 00:15:46.484 "reset": true, 00:15:46.484 "compare": false, 00:15:46.484 "compare_and_write": false, 00:15:46.484 "abort": false, 00:15:46.484 "nvme_admin": false, 00:15:46.484 "nvme_io": false 00:15:46.484 }, 00:15:46.484 "driver_specific": { 00:15:46.484 "lvol": { 00:15:46.484 "lvol_store_uuid": "0e8f1e28-e12c-49a9-8346-a3f584cf9225", 00:15:46.484 "base_bdev": "nvme0n1", 00:15:46.484 "thin_provision": true, 00:15:46.484 "snapshot": false, 00:15:46.484 "clone": false, 00:15:46.484 "esnap_clone": false 00:15:46.484 } 00:15:46.484 } 00:15:46.484 } 00:15:46.484 ]' 00:15:46.484 06:40:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:46.484 06:40:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:46.484 06:40:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:46.484 06:40:57 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:46.484 06:40:57 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:46.484 06:40:57 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:46.484 06:40:57 -- ftl/common.sh@41 -- # local base_size=5171 00:15:46.484 06:40:57 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:46.484 06:40:57 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:46.743 06:40:57 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:46.743 06:40:57 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:46.743 06:40:57 -- ftl/common.sh@48 -- # get_bdev_size 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.743 06:40:57 -- common/autotest_common.sh@1367 -- # local bdev_name=67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:46.743 06:40:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:46.743 06:40:57 -- common/autotest_common.sh@1369 -- # local bs 00:15:46.743 06:40:57 -- common/autotest_common.sh@1370 -- # local nb 00:15:46.743 06:40:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:47.078 06:40:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:47.078 { 00:15:47.078 "name": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:47.078 "aliases": [ 00:15:47.078 "lvs/nvme0n1p0" 00:15:47.078 ], 00:15:47.078 "product_name": "Logical Volume", 00:15:47.078 "block_size": 4096, 00:15:47.078 "num_blocks": 26476544, 
00:15:47.078 "uuid": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:47.078 "assigned_rate_limits": { 00:15:47.079 "rw_ios_per_sec": 0, 00:15:47.079 "rw_mbytes_per_sec": 0, 00:15:47.079 "r_mbytes_per_sec": 0, 00:15:47.079 "w_mbytes_per_sec": 0 00:15:47.079 }, 00:15:47.079 "claimed": false, 00:15:47.079 "zoned": false, 00:15:47.079 "supported_io_types": { 00:15:47.079 "read": true, 00:15:47.079 "write": true, 00:15:47.079 "unmap": true, 00:15:47.079 "write_zeroes": true, 00:15:47.079 "flush": false, 00:15:47.079 "reset": true, 00:15:47.079 "compare": false, 00:15:47.079 "compare_and_write": false, 00:15:47.079 "abort": false, 00:15:47.079 "nvme_admin": false, 00:15:47.079 "nvme_io": false 00:15:47.079 }, 00:15:47.079 "driver_specific": { 00:15:47.079 "lvol": { 00:15:47.079 "lvol_store_uuid": "0e8f1e28-e12c-49a9-8346-a3f584cf9225", 00:15:47.079 "base_bdev": "nvme0n1", 00:15:47.079 "thin_provision": true, 00:15:47.079 "snapshot": false, 00:15:47.079 "clone": false, 00:15:47.079 "esnap_clone": false 00:15:47.079 } 00:15:47.079 } 00:15:47.079 } 00:15:47.079 ]' 00:15:47.079 06:40:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:47.079 06:40:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:47.079 06:40:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:47.079 06:40:57 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:47.079 06:40:57 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:47.079 06:40:57 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:47.079 06:40:57 -- ftl/common.sh@48 -- # cache_size=5171 00:15:47.079 06:40:57 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:47.079 06:40:57 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:47.079 06:40:57 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:47.079 06:40:57 -- ftl/trim.sh@47 -- # get_bdev_size 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:47.079 06:40:57 -- common/autotest_common.sh@1367 -- # local bdev_name=67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:47.079 06:40:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:47.079 06:40:57 -- common/autotest_common.sh@1369 -- # local bs 00:15:47.079 06:40:57 -- common/autotest_common.sh@1370 -- # local nb 00:15:47.079 06:40:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67f384db-4941-4cb9-9a90-8de637eabcd2 00:15:47.338 06:40:58 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:47.338 { 00:15:47.338 "name": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:47.338 "aliases": [ 00:15:47.338 "lvs/nvme0n1p0" 00:15:47.338 ], 00:15:47.338 "product_name": "Logical Volume", 00:15:47.338 "block_size": 4096, 00:15:47.338 "num_blocks": 26476544, 00:15:47.338 "uuid": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:47.338 "assigned_rate_limits": { 00:15:47.338 "rw_ios_per_sec": 0, 00:15:47.338 "rw_mbytes_per_sec": 0, 00:15:47.338 "r_mbytes_per_sec": 0, 00:15:47.338 "w_mbytes_per_sec": 0 00:15:47.338 }, 00:15:47.338 "claimed": false, 00:15:47.338 "zoned": false, 00:15:47.338 "supported_io_types": { 00:15:47.338 "read": true, 00:15:47.338 "write": true, 00:15:47.338 "unmap": true, 00:15:47.338 "write_zeroes": true, 00:15:47.338 "flush": false, 00:15:47.338 "reset": true, 00:15:47.338 "compare": false, 00:15:47.338 "compare_and_write": false, 00:15:47.338 "abort": false, 00:15:47.338 "nvme_admin": false, 00:15:47.338 "nvme_io": false 00:15:47.338 }, 00:15:47.338 "driver_specific": { 00:15:47.338 "lvol": { 00:15:47.338 
"lvol_store_uuid": "0e8f1e28-e12c-49a9-8346-a3f584cf9225", 00:15:47.338 "base_bdev": "nvme0n1", 00:15:47.338 "thin_provision": true, 00:15:47.338 "snapshot": false, 00:15:47.338 "clone": false, 00:15:47.338 "esnap_clone": false 00:15:47.338 } 00:15:47.338 } 00:15:47.338 } 00:15:47.338 ]' 00:15:47.338 06:40:58 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:47.338 06:40:58 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:47.338 06:40:58 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:47.338 06:40:58 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:47.338 06:40:58 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:47.338 06:40:58 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:47.338 06:40:58 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:47.338 06:40:58 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 67f384db-4941-4cb9-9a90-8de637eabcd2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:47.598 [2024-11-28 06:40:58.272635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.598 [2024-11-28 06:40:58.272678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:47.598 [2024-11-28 06:40:58.272690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.598 [2024-11-28 06:40:58.272722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.598 [2024-11-28 06:40:58.274635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.274666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.599 [2024-11-28 06:40:58.274676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:15:47.599 [2024-11-28 06:40:58.274681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.274782] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:47.599 [2024-11-28 06:40:58.274982] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:47.599 [2024-11-28 06:40:58.274995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.275001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.599 [2024-11-28 06:40:58.275009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:15:47.599 [2024-11-28 06:40:58.275015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.275219] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:15:47.599 [2024-11-28 06:40:58.276155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.276271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:47.599 [2024-11-28 06:40:58.276284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:15:47.599 [2024-11-28 06:40:58.276292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.280992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.281019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.599 
[2024-11-28 06:40:58.281034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:15:47.599 [2024-11-28 06:40:58.281044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.281139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.281150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.599 [2024-11-28 06:40:58.281156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:47.599 [2024-11-28 06:40:58.281163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.281189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.281198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:47.599 [2024-11-28 06:40:58.281204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:47.599 [2024-11-28 06:40:58.281211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.281257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:47.599 [2024-11-28 06:40:58.282474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.282506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.599 [2024-11-28 06:40:58.282515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:15:47.599 [2024-11-28 06:40:58.282521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.282560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.282567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:47.599 [2024-11-28 06:40:58.282576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:47.599 [2024-11-28 06:40:58.282581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.282618] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:47.599 [2024-11-28 06:40:58.282720] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:47.599 [2024-11-28 06:40:58.282733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:47.599 [2024-11-28 06:40:58.282753] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:47.599 [2024-11-28 06:40:58.282762] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:47.599 [2024-11-28 06:40:58.282769] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:47.599 [2024-11-28 06:40:58.282779] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:47.599 [2024-11-28 06:40:58.282785] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:47.599 [2024-11-28 06:40:58.282792] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:47.599 [2024-11-28 06:40:58.282798] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:47.599 [2024-11-28 
06:40:58.282806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.282811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:47.599 [2024-11-28 06:40:58.282819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:15:47.599 [2024-11-28 06:40:58.282832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.282897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.599 [2024-11-28 06:40:58.282904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:47.599 [2024-11-28 06:40:58.282911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:47.599 [2024-11-28 06:40:58.282918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.599 [2024-11-28 06:40:58.282998] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:47.599 [2024-11-28 06:40:58.283005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:47.599 [2024-11-28 06:40:58.283014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.599 [2024-11-28 06:40:58.283021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.599 [2024-11-28 06:40:58.283028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:47.599 [2024-11-28 06:40:58.283034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:47.599 [2024-11-28 06:40:58.283040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:47.599 [2024-11-28 06:40:58.283044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:47.599 [2024-11-28 06:40:58.283051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:47.599 [2024-11-28 06:40:58.283057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.599 [2024-11-28 06:40:58.283063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:47.599 [2024-11-28 06:40:58.283068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:47.599 [2024-11-28 06:40:58.283076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.599 [2024-11-28 06:40:58.283082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:47.599 [2024-11-28 06:40:58.283088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:47.599 [2024-11-28 06:40:58.283093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.599 [2024-11-28 06:40:58.283099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:47.599 [2024-11-28 06:40:58.283104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:47.599 [2024-11-28 06:40:58.283109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.599 [2024-11-28 06:40:58.283115] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:47.599 [2024-11-28 06:40:58.283122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:47.599 [2024-11-28 06:40:58.283129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:47.599 [2024-11-28 06:40:58.283137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:47.599 [2024-11-28 06:40:58.283143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:47.599 [2024-11-28 06:40:58.283150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.599 [2024-11-28 06:40:58.283156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:47.599 [2024-11-28 06:40:58.283162] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:47.600 [2024-11-28 06:40:58.283177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.600 [2024-11-28 06:40:58.283187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:47.600 [2024-11-28 06:40:58.283192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:47.600 [2024-11-28 06:40:58.283199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.600 [2024-11-28 06:40:58.283205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:47.600 [2024-11-28 06:40:58.283212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:47.600 [2024-11-28 06:40:58.283218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:47.600 [2024-11-28 06:40:58.283225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:47.600 [2024-11-28 06:40:58.283230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:47.600 [2024-11-28 06:40:58.283242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.600 [2024-11-28 06:40:58.283247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:47.600 [2024-11-28 06:40:58.283254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:47.600 [2024-11-28 06:40:58.283260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.600 [2024-11-28 06:40:58.283267] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:47.600 [2024-11-28 06:40:58.283273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:47.600 [2024-11-28 06:40:58.283281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.600 [2024-11-28 06:40:58.283287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.600 [2024-11-28 06:40:58.283298] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:47.600 [2024-11-28 06:40:58.283305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:47.600 [2024-11-28 06:40:58.283311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:47.600 [2024-11-28 06:40:58.283317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:47.600 [2024-11-28 06:40:58.283324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:47.600 [2024-11-28 06:40:58.283330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:47.600 [2024-11-28 06:40:58.283339] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:47.600 [2024-11-28 06:40:58.283346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.600 [2024-11-28 06:40:58.283356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:47.600 [2024-11-28 06:40:58.283363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:47.600 [2024-11-28 06:40:58.283372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:47.600 [2024-11-28 06:40:58.283378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:47.600 [2024-11-28 06:40:58.283386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:47.600 [2024-11-28 06:40:58.283392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:47.600 [2024-11-28 06:40:58.283399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:47.600 [2024-11-28 06:40:58.283405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:47.600 [2024-11-28 06:40:58.283414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:47.600 [2024-11-28 06:40:58.283420] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:47.600 [2024-11-28 06:40:58.283428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:47.600 [2024-11-28 06:40:58.283434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:47.600 [2024-11-28 06:40:58.283442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:47.600 [2024-11-28 06:40:58.283449] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:47.600 [2024-11-28 06:40:58.283457] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.600 [2024-11-28 06:40:58.283465] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:47.600 [2024-11-28 06:40:58.283473] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:47.600 [2024-11-28 06:40:58.283479] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:47.600 [2024-11-28 06:40:58.283486] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:47.600 [2024-11-28 06:40:58.283493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.283500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:47.600 [2024-11-28 06:40:58.283507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:15:47.600 [2024-11-28 06:40:58.283513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.288837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:47.600 [2024-11-28 06:40:58.288874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.600 [2024-11-28 06:40:58.288889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.263 ms 00:15:47.600 [2024-11-28 06:40:58.288896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.288985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.288995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:47.600 [2024-11-28 06:40:58.289001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:47.600 [2024-11-28 06:40:58.289007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.296802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.296830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.600 [2024-11-28 06:40:58.296837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.771 ms 00:15:47.600 [2024-11-28 06:40:58.296844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.296888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.296897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.600 [2024-11-28 06:40:58.296913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:47.600 [2024-11-28 06:40:58.296921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.297198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.297222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.600 [2024-11-28 06:40:58.297230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:15:47.600 [2024-11-28 06:40:58.297237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.297337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.297358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.600 [2024-11-28 06:40:58.297364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:15:47.600 [2024-11-28 06:40:58.297371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.600 [2024-11-28 06:40:58.312656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.600 [2024-11-28 06:40:58.312911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.600 [2024-11-28 06:40:58.312943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.260 ms 00:15:47.601 [2024-11-28 06:40:58.312972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.601 [2024-11-28 06:40:58.323141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:47.601 [2024-11-28 06:40:58.335073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.601 [2024-11-28 06:40:58.335100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:47.601 [2024-11-28 06:40:58.335112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.923 ms 00:15:47.601 
[2024-11-28 06:40:58.335120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.859 [2024-11-28 06:40:58.399468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.859 [2024-11-28 06:40:58.399518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:47.860 [2024-11-28 06:40:58.399538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.292 ms 00:15:47.860 [2024-11-28 06:40:58.399545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.860 [2024-11-28 06:40:58.399592] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:47.860 [2024-11-28 06:40:58.399602] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:50.392 [2024-11-28 06:41:00.777701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.777776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:50.392 [2024-11-28 06:41:00.777796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2378.090 ms 00:15:50.392 [2024-11-28 06:41:00.777804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.777984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.777995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:50.392 [2024-11-28 06:41:00.778006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:15:50.392 [2024-11-28 06:41:00.778014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.780945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.780979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:50.392 [2024-11-28 06:41:00.780993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.897 ms 00:15:50.392 [2024-11-28 06:41:00.781003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.783688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.783734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:50.392 [2024-11-28 06:41:00.783746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:15:50.392 [2024-11-28 06:41:00.783754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.783935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.783944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:50.392 [2024-11-28 06:41:00.783955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:15:50.392 [2024-11-28 06:41:00.783962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.806020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.806051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:50.392 [2024-11-28 06:41:00.806063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.022 ms 00:15:50.392 [2024-11-28 06:41:00.806072] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.809785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.809824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:50.392 [2024-11-28 06:41:00.809838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.669 ms 00:15:50.392 [2024-11-28 06:41:00.809845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.813487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.813517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:50.392 [2024-11-28 06:41:00.813529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:15:50.392 [2024-11-28 06:41:00.813538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.816919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.816949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:50.392 [2024-11-28 06:41:00.816961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:15:50.392 [2024-11-28 06:41:00.816968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.817014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.392 [2024-11-28 06:41:00.817025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:50.392 [2024-11-28 06:41:00.817035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:50.392 [2024-11-28 06:41:00.817053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.392 [2024-11-28 06:41:00.817137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.393 [2024-11-28 06:41:00.817146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:50.393 [2024-11-28 06:41:00.817157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:50.393 [2024-11-28 06:41:00.817165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.393 [2024-11-28 06:41:00.817985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:50.393 [2024-11-28 06:41:00.818943] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2545.090 ms, result 0 00:15:50.393 [2024-11-28 06:41:00.819770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:50.393 { 00:15:50.393 "name": "ftl0", 00:15:50.393 "uuid": "ea43bc93-5675-45fc-ae12-8201ac6e92f9" 00:15:50.393 } 00:15:50.393 06:41:00 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:50.393 06:41:00 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:50.393 06:41:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:50.393 06:41:00 -- common/autotest_common.sh@899 -- # local i 00:15:50.393 06:41:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:50.393 06:41:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:50.393 06:41:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:50.393 06:41:01 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:50.651 [ 00:15:50.651 { 00:15:50.651 "name": "ftl0", 00:15:50.651 "aliases": [ 00:15:50.651 "ea43bc93-5675-45fc-ae12-8201ac6e92f9" 00:15:50.651 ], 00:15:50.651 "product_name": "FTL disk", 00:15:50.651 "block_size": 4096, 00:15:50.651 "num_blocks": 23592960, 00:15:50.651 "uuid": "ea43bc93-5675-45fc-ae12-8201ac6e92f9", 00:15:50.651 "assigned_rate_limits": { 00:15:50.651 "rw_ios_per_sec": 0, 00:15:50.651 "rw_mbytes_per_sec": 0, 00:15:50.651 "r_mbytes_per_sec": 0, 00:15:50.651 "w_mbytes_per_sec": 0 00:15:50.651 }, 00:15:50.651 "claimed": false, 00:15:50.651 "zoned": false, 00:15:50.651 "supported_io_types": { 00:15:50.651 "read": true, 00:15:50.651 "write": true, 00:15:50.651 "unmap": true, 00:15:50.651 "write_zeroes": true, 00:15:50.651 "flush": true, 00:15:50.651 "reset": false, 00:15:50.651 "compare": false, 00:15:50.652 "compare_and_write": false, 00:15:50.652 "abort": false, 00:15:50.652 "nvme_admin": false, 00:15:50.652 "nvme_io": false 00:15:50.652 }, 00:15:50.652 "driver_specific": { 00:15:50.652 "ftl": { 00:15:50.652 "base_bdev": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:50.652 "cache": "nvc0n1p0" 00:15:50.652 } 00:15:50.652 } 00:15:50.652 } 00:15:50.652 ] 00:15:50.652 06:41:01 -- common/autotest_common.sh@905 -- # return 0 00:15:50.652 06:41:01 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:50.652 06:41:01 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:50.652 06:41:01 -- ftl/trim.sh@56 -- # echo ']}' 00:15:50.652 06:41:01 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:50.910 06:41:01 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:50.910 { 00:15:50.910 "name": "ftl0", 00:15:50.910 "aliases": [ 00:15:50.910 "ea43bc93-5675-45fc-ae12-8201ac6e92f9" 00:15:50.910 ], 00:15:50.910 "product_name": "FTL disk", 00:15:50.910 "block_size": 4096, 00:15:50.910 "num_blocks": 23592960, 00:15:50.910 "uuid": "ea43bc93-5675-45fc-ae12-8201ac6e92f9", 00:15:50.910 "assigned_rate_limits": { 00:15:50.910 "rw_ios_per_sec": 0, 00:15:50.910 "rw_mbytes_per_sec": 0, 00:15:50.910 "r_mbytes_per_sec": 0, 00:15:50.910 "w_mbytes_per_sec": 0 00:15:50.910 }, 00:15:50.910 "claimed": false, 00:15:50.910 "zoned": false, 00:15:50.910 "supported_io_types": { 00:15:50.910 "read": true, 00:15:50.910 "write": true, 00:15:50.910 "unmap": true, 00:15:50.910 "write_zeroes": true, 00:15:50.910 "flush": true, 00:15:50.910 "reset": false, 00:15:50.910 "compare": false, 00:15:50.910 "compare_and_write": false, 00:15:50.910 "abort": false, 00:15:50.910 "nvme_admin": false, 00:15:50.910 "nvme_io": false 00:15:50.910 }, 00:15:50.910 "driver_specific": { 00:15:50.910 "ftl": { 00:15:50.910 "base_bdev": "67f384db-4941-4cb9-9a90-8de637eabcd2", 00:15:50.910 "cache": "nvc0n1p0" 00:15:50.910 } 00:15:50.911 } 00:15:50.911 } 00:15:50.911 ]' 00:15:50.911 06:41:01 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:50.911 06:41:01 -- ftl/trim.sh@60 -- # nb=23592960 00:15:50.911 06:41:01 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:51.171 [2024-11-28 06:41:01.772333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.171 [2024-11-28 06:41:01.772381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:51.171 [2024-11-28 06:41:01.772396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:51.171 [2024-11-28 06:41:01.772406] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.171 [2024-11-28 06:41:01.772451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:51.171 [2024-11-28 06:41:01.772885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.171 [2024-11-28 06:41:01.772902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:51.171 [2024-11-28 06:41:01.772913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:15:51.171 [2024-11-28 06:41:01.772921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.171 [2024-11-28 06:41:01.773426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.171 [2024-11-28 06:41:01.773443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:51.172 [2024-11-28 06:41:01.773458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:15:51.172 [2024-11-28 06:41:01.773466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.777133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.777156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:51.172 [2024-11-28 06:41:01.777167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:15:51.172 [2024-11-28 06:41:01.777176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.784092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.784131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:51.172 [2024-11-28 06:41:01.784142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.863 ms 00:15:51.172 [2024-11-28 06:41:01.784150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.785720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.785749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:51.172 [2024-11-28 06:41:01.785760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:15:51.172 [2024-11-28 06:41:01.785768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.790195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.790228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:51.172 [2024-11-28 06:41:01.790241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.376 ms 00:15:51.172 [2024-11-28 06:41:01.790248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.790426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.790436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:51.172 [2024-11-28 06:41:01.790449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:15:51.172 [2024-11-28 06:41:01.790456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.792287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.792316] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:51.172 [2024-11-28 06:41:01.792327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:15:51.172 [2024-11-28 06:41:01.792335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.793798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.793826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:51.172 [2024-11-28 06:41:01.793836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:15:51.172 [2024-11-28 06:41:01.793843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.794821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.794968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:51.172 [2024-11-28 06:41:01.794987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:15:51.172 [2024-11-28 06:41:01.794994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.796161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.172 [2024-11-28 06:41:01.796189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:51.172 [2024-11-28 06:41:01.796200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:15:51.172 [2024-11-28 06:41:01.796206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.172 [2024-11-28 06:41:01.796251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:51.172 [2024-11-28 06:41:01.796266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free 00:15:51.173 [2024-11-28 06:41:01.797149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:51.173 [2024-11-28 06:41:01.797158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:15:51.173 [2024-11-28 06:41:01.797166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:51.173 [2024-11-28 06:41:01.797175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:51.173 [2024-11-28 06:41:01.797185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:51.173 [2024-11-28 06:41:01.797196] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:51.173 [2024-11-28 06:41:01.797203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:51.173 [2024-11-28 06:41:01.797212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:51.173 [2024-11-28 06:41:01.797219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:51.173 [2024-11-28 06:41:01.797228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:51.173 [2024-11-28 06:41:01.797234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:51.173 [2024-11-28 06:41:01.797243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.173 [2024-11-28 06:41:01.797252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:51.173 [2024-11-28 06:41:01.797262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:15:51.173 [2024-11-28 06:41:01.797269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
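The statistics block above reports WAF: inf alongside user writes: 0. Write amplification is the ratio of total media writes to user writes, and at this point in the run no user I/O has gone through ftl0: all 960 media writes are FTL metadata traffic from the startup and this shutdown, so the ratio is a division by zero and prints as inf. A minimal shell sketch of the same calculation, with variable names chosen here for illustration (this is not part of trim.sh):

total_writes=960   # "total writes" from the dump above; all metadata so far
user_writes=0      # "user writes: 0" - no user I/O has hit ftl0 yet
if [ "$user_writes" -eq 0 ]; then
  echo "WAF: inf"  # undefined ratio, reported as inf in the dump
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
fi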
00:15:51.173 [2024-11-28 06:41:01.798683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.173 [2024-11-28 06:41:01.798699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:51.173 [2024-11-28 06:41:01.798987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:15:51.173 [2024-11-28 06:41:01.799010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.799093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.173 [2024-11-28 06:41:01.799185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:51.173 [2024-11-28 06:41:01.799214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:51.173 [2024-11-28 06:41:01.799233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.804211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.804326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:51.173 [2024-11-28 06:41:01.804383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.804407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.804519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.804547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:51.173 [2024-11-28 06:41:01.804600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.804623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.804739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.804773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:51.173 [2024-11-28 06:41:01.804796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.804846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.804919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.804944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:51.173 [2024-11-28 06:41:01.804992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.805046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.813985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.814130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:51.173 [2024-11-28 06:41:01.814185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.814281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.817842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.817952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:51.173 [2024-11-28 06:41:01.818024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 
06:41:01.818048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.818096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.818150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:51.173 [2024-11-28 06:41:01.818176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.173 [2024-11-28 06:41:01.818196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.173 [2024-11-28 06:41:01.818261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.173 [2024-11-28 06:41:01.818289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:51.173 [2024-11-28 06:41:01.818317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.174 [2024-11-28 06:41:01.818336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.174 [2024-11-28 06:41:01.818483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.174 [2024-11-28 06:41:01.818548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:51.174 [2024-11-28 06:41:01.818595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.174 [2024-11-28 06:41:01.818617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.174 [2024-11-28 06:41:01.818714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.174 [2024-11-28 06:41:01.818776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:51.174 [2024-11-28 06:41:01.818830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.174 [2024-11-28 06:41:01.818874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.174 [2024-11-28 06:41:01.818935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.174 [2024-11-28 06:41:01.818983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:51.174 [2024-11-28 06:41:01.819008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.174 [2024-11-28 06:41:01.819054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.174 [2024-11-28 06:41:01.819125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.174 [2024-11-28 06:41:01.819205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:51.174 [2024-11-28 06:41:01.819237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.174 [2024-11-28 06:41:01.819256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.174 [2024-11-28 06:41:01.819440] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.083 ms, result 0 00:15:51.174 true 00:15:51.174 06:41:01 -- ftl/trim.sh@63 -- # killprocess 82821 00:15:51.174 06:41:01 -- common/autotest_common.sh@936 -- # '[' -z 82821 ']' 00:15:51.174 06:41:01 -- common/autotest_common.sh@940 -- # kill -0 82821 00:15:51.174 06:41:01 -- common/autotest_common.sh@941 -- # uname 00:15:51.174 06:41:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.174 06:41:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82821 00:15:51.174 killing process with pid 82821 00:15:51.174 06:41:01 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:15:51.174 06:41:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.174 06:41:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82821' 00:15:51.174 06:41:01 -- common/autotest_common.sh@955 -- # kill 82821 00:15:51.174 06:41:01 -- common/autotest_common.sh@960 -- # wait 82821 00:15:56.440 06:41:06 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:57.007 65536+0 records in 00:15:57.007 65536+0 records out 00:15:57.007 268435456 bytes (268 MB, 256 MiB) copied, 0.79477 s, 338 MB/s 00:15:57.007 06:41:07 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.007 [2024-11-28 06:41:07.681663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:57.007 [2024-11-28 06:41:07.681792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82970 ] 00:15:57.265 [2024-11-28 06:41:07.817061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.265 [2024-11-28 06:41:07.846619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.265 [2024-11-28 06:41:07.928068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:57.265 [2024-11-28 06:41:07.928139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:57.525 [2024-11-28 06:41:08.073619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.073666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:57.525 [2024-11-28 06:41:08.073679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:57.525 [2024-11-28 06:41:08.073688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.075882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.075917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:57.525 [2024-11-28 06:41:08.075927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:15:57.525 [2024-11-28 06:41:08.075938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.076007] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:57.525 [2024-11-28 06:41:08.076235] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:57.525 [2024-11-28 06:41:08.076249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.076256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:57.525 [2024-11-28 06:41:08.076268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:15:57.525 [2024-11-28 06:41:08.076275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.077303] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:57.525 [2024-11-28 06:41:08.079546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 
[2024-11-28 06:41:08.079582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:57.525 [2024-11-28 06:41:08.079592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:15:57.525 [2024-11-28 06:41:08.079599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.079652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.079663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:57.525 [2024-11-28 06:41:08.079671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:57.525 [2024-11-28 06:41:08.079680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.084176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.084322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:57.525 [2024-11-28 06:41:08.084336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:15:57.525 [2024-11-28 06:41:08.084344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.084445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.084459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:57.525 [2024-11-28 06:41:08.084470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:57.525 [2024-11-28 06:41:08.084480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.084508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.525 [2024-11-28 06:41:08.084516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:57.525 [2024-11-28 06:41:08.084524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:57.525 [2024-11-28 06:41:08.084530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.525 [2024-11-28 06:41:08.084555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:57.526 [2024-11-28 06:41:08.085815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.526 [2024-11-28 06:41:08.085851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:57.526 [2024-11-28 06:41:08.085862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:15:57.526 [2024-11-28 06:41:08.085870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.526 [2024-11-28 06:41:08.085905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.526 [2024-11-28 06:41:08.085917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:57.526 [2024-11-28 06:41:08.085924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:57.526 [2024-11-28 06:41:08.085932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.526 [2024-11-28 06:41:08.085950] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:57.526 [2024-11-28 06:41:08.085967] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:57.526 [2024-11-28 06:41:08.086000] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:57.526 [2024-11-28 06:41:08.086019] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:57.526 [2024-11-28 06:41:08.086090] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:57.526 [2024-11-28 06:41:08.086103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:57.526 [2024-11-28 06:41:08.086114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:57.526 [2024-11-28 06:41:08.086127] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086136] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086143] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:57.526 [2024-11-28 06:41:08.086150] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:57.526 [2024-11-28 06:41:08.086160] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:57.526 [2024-11-28 06:41:08.086168] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:57.526 [2024-11-28 06:41:08.086175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.526 [2024-11-28 06:41:08.086181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:57.526 [2024-11-28 06:41:08.086189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:15:57.526 [2024-11-28 06:41:08.086195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.526 [2024-11-28 06:41:08.086258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.526 [2024-11-28 06:41:08.086267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:57.526 [2024-11-28 06:41:08.086276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:57.526 [2024-11-28 06:41:08.086282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.526 [2024-11-28 06:41:08.086355] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:57.526 [2024-11-28 06:41:08.086365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:57.526 [2024-11-28 06:41:08.086372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:57.526 [2024-11-28 06:41:08.086397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:57.526 [2024-11-28 06:41:08.086420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:57.526 [2024-11-28 06:41:08.086434] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:57.526 [2024-11-28 06:41:08.086440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:57.526 [2024-11-28 06:41:08.086447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:57.526 [2024-11-28 06:41:08.086453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:57.526 [2024-11-28 06:41:08.086460] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:57.526 [2024-11-28 06:41:08.086468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:57.526 [2024-11-28 06:41:08.086487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:57.526 [2024-11-28 06:41:08.086496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:57.526 [2024-11-28 06:41:08.086511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:57.526 [2024-11-28 06:41:08.086519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:57.526 [2024-11-28 06:41:08.086538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:57.526 [2024-11-28 06:41:08.086560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:57.526 [2024-11-28 06:41:08.086583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:57.526 [2024-11-28 06:41:08.086608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086623] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:57.526 [2024-11-28 06:41:08.086631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:57.526 [2024-11-28 06:41:08.086645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:57.526 [2024-11-28 06:41:08.086653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:57.526 [2024-11-28 06:41:08.086660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:57.526 [2024-11-28 06:41:08.086668] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:57.526 [2024-11-28 06:41:08.086676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:57.526 
[2024-11-28 06:41:08.086684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:57.526 [2024-11-28 06:41:08.086700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:57.526 [2024-11-28 06:41:08.086725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:57.526 [2024-11-28 06:41:08.086733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:57.526 [2024-11-28 06:41:08.086741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:57.526 [2024-11-28 06:41:08.086750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:57.526 [2024-11-28 06:41:08.086758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:57.526 [2024-11-28 06:41:08.086767] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:57.526 [2024-11-28 06:41:08.086787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:57.526 [2024-11-28 06:41:08.086799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:57.526 [2024-11-28 06:41:08.086808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:57.526 [2024-11-28 06:41:08.086816] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:57.526 [2024-11-28 06:41:08.086825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:57.526 [2024-11-28 06:41:08.086832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:57.526 [2024-11-28 06:41:08.086840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:57.526 [2024-11-28 06:41:08.086847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:57.526 [2024-11-28 06:41:08.086855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:57.526 [2024-11-28 06:41:08.086862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:57.526 [2024-11-28 06:41:08.086868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:57.526 [2024-11-28 06:41:08.086875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:57.526 [2024-11-28 06:41:08.086882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:57.526 [2024-11-28 06:41:08.086891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:57.526 [2024-11-28 06:41:08.086898] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:57.526 [2024-11-28 06:41:08.086906] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:57.526 [2024-11-28 06:41:08.086914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:57.526 [2024-11-28 06:41:08.086921] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:57.526 [2024-11-28 06:41:08.086928] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:57.527 [2024-11-28 06:41:08.086935] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:57.527 [2024-11-28 06:41:08.086942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.086954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:57.527 [2024-11-28 06:41:08.086962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:15:57.527 [2024-11-28 06:41:08.086968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.092601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.092630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:57.527 [2024-11-28 06:41:08.092640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms 00:15:57.527 [2024-11-28 06:41:08.092654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.092782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.092794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:57.527 [2024-11-28 06:41:08.092802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:57.527 [2024-11-28 06:41:08.092809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.109130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.109179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:57.527 [2024-11-28 06:41:08.109197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.298 ms 00:15:57.527 [2024-11-28 06:41:08.109214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.109307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.109323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:57.527 [2024-11-28 06:41:08.109336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:57.527 [2024-11-28 06:41:08.109346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.109693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.109749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:57.527 [2024-11-28 06:41:08.109764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.316 ms 00:15:57.527 [2024-11-28 06:41:08.109776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.109951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.109966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:57.527 [2024-11-28 06:41:08.109979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:57.527 [2024-11-28 06:41:08.109991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.115954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.115997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:57.527 [2024-11-28 06:41:08.116010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.932 ms 00:15:57.527 [2024-11-28 06:41:08.116021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.118680] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:57.527 [2024-11-28 06:41:08.118743] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:57.527 [2024-11-28 06:41:08.118758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.118769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:57.527 [2024-11-28 06:41:08.118781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:15:57.527 [2024-11-28 06:41:08.118791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.133482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.133512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:57.527 [2024-11-28 06:41:08.133529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.635 ms 00:15:57.527 [2024-11-28 06:41:08.133536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.135393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.135527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:57.527 [2024-11-28 06:41:08.135543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:15:57.527 [2024-11-28 06:41:08.135552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.136993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.137020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:57.527 [2024-11-28 06:41:08.137028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:15:57.527 [2024-11-28 06:41:08.137035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.137240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.137251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:57.527 [2024-11-28 06:41:08.137260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:15:57.527 [2024-11-28 06:41:08.137266] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.154561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.154738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:57.527 [2024-11-28 06:41:08.154755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.275 ms 00:15:57.527 [2024-11-28 06:41:08.154763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.161995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:57.527 [2024-11-28 06:41:08.175443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.175477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:57.527 [2024-11-28 06:41:08.175494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.612 ms 00:15:57.527 [2024-11-28 06:41:08.175502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.175570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.175579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:57.527 [2024-11-28 06:41:08.175590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:57.527 [2024-11-28 06:41:08.175598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.175642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.175651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:57.527 [2024-11-28 06:41:08.175662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:57.527 [2024-11-28 06:41:08.175670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.176880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.176997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:57.527 [2024-11-28 06:41:08.177011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:15:57.527 [2024-11-28 06:41:08.177022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.177057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.177071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:57.527 [2024-11-28 06:41:08.177082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:57.527 [2024-11-28 06:41:08.177089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.177122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:57.527 [2024-11-28 06:41:08.177131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.177139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:57.527 [2024-11-28 06:41:08.177146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:57.527 [2024-11-28 06:41:08.177153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.180412] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.180542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:57.527 [2024-11-28 06:41:08.180556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:15:57.527 [2024-11-28 06:41:08.180564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.180627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:57.527 [2024-11-28 06:41:08.180637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:57.527 [2024-11-28 06:41:08.180645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:57.527 [2024-11-28 06:41:08.180654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.527 [2024-11-28 06:41:08.181385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:57.527 [2024-11-28 06:41:08.182338] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.514 ms, result 0 00:15:57.527 [2024-11-28 06:41:08.182930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:57.527 [2024-11-28 06:41:08.192305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:58.461  [2024-11-28T06:41:10.605Z] Copying: 41/256 [MB] (41 MBps) [2024-11-28T06:41:11.539Z] Copying: 82/256 [MB] (40 MBps) [2024-11-28T06:41:12.473Z] Copying: 123/256 [MB] (41 MBps) [2024-11-28T06:41:13.408Z] Copying: 164/256 [MB] (40 MBps) [2024-11-28T06:41:14.344Z] Copying: 206/256 [MB] (42 MBps) [2024-11-28T06:41:14.344Z] Copying: 251/256 [MB] (44 MBps) [2024-11-28T06:41:14.344Z] Copying: 256/256 [MB] (average 41 MBps)[2024-11-28 06:41:14.297015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:03.574 [2024-11-28 06:41:14.298099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.298136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:03.574 [2024-11-28 06:41:14.298149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:03.574 [2024-11-28 06:41:14.298163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.298188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:03.574 [2024-11-28 06:41:14.298579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.298606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:03.574 [2024-11-28 06:41:14.298616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:16:03.574 [2024-11-28 06:41:14.298624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.300263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.300431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:03.574 [2024-11-28 06:41:14.300448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:16:03.574 [2024-11-28 06:41:14.300463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
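The 256/256 [MB] copy that just completed matches the pattern file created earlier with dd: 65536 records of 4096 bytes is 268435456 bytes, exactly 256 MiB, and dividing by the 0.79477 s that dd reported gives the quoted 338 MB/s. The same arithmetic as a throwaway shell check (added for illustration; trim.sh does not emit this):

bs=4096; count=65536
echo $(( bs * count ))               # 268435456 bytes in random_pattern
echo $(( bs * count / 1024 / 1024 )) # 256 MiB, matching "Copying: 256/256 [MB]"
awk -v b=$(( 4096 * 65536 )) -v s=0.79477 'BEGIN { printf "%.0f MB/s\n", b / s / 1e6 }'  # ~338 MB/s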
00:16:03.574 [2024-11-28 06:41:14.306739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.306851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:03.574 [2024-11-28 06:41:14.306867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.256 ms 00:16:03.574 [2024-11-28 06:41:14.306880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.313770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.313871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:03.574 [2024-11-28 06:41:14.313885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.837 ms 00:16:03.574 [2024-11-28 06:41:14.313894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.315561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.315591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:03.574 [2024-11-28 06:41:14.315600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:16:03.574 [2024-11-28 06:41:14.315607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.319532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.319569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:03.574 [2024-11-28 06:41:14.319578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.894 ms 00:16:03.574 [2024-11-28 06:41:14.319586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.319716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.319732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:03.574 [2024-11-28 06:41:14.319741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:03.574 [2024-11-28 06:41:14.319748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.322522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.322552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:03.574 [2024-11-28 06:41:14.322560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:16:03.574 [2024-11-28 06:41:14.322567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.324320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.324349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:03.574 [2024-11-28 06:41:14.324357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:16:03.574 [2024-11-28 06:41:14.324364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.325514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.325630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:03.574 [2024-11-28 06:41:14.325644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:16:03.574 [2024-11-28 06:41:14.325651] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.326941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.574 [2024-11-28 06:41:14.326965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:03.574 [2024-11-28 06:41:14.326974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:16:03.574 [2024-11-28 06:41:14.326981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.574 [2024-11-28 06:41:14.327022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:03.574 [2024-11-28 06:41:14.327038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:03.574 [2024-11-28 06:41:14.327137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327198] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 
06:41:14.327390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:16:03.575 [2024-11-28 06:41:14.327572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:03.575 [2024-11-28 06:41:14.327825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:03.575 [2024-11-28 06:41:14.327832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:03.575 [2024-11-28 06:41:14.327840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:03.576 [2024-11-28 06:41:14.327847] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:03.576 [2024-11-28 06:41:14.327853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:03.576 [2024-11-28 06:41:14.327861] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:03.576 [2024-11-28 06:41:14.327873] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:03.576 [2024-11-28 06:41:14.327880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:03.576 [2024-11-28 06:41:14.327890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:03.576 [2024-11-28 06:41:14.327896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:03.576 [2024-11-28 06:41:14.327902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:03.576 [2024-11-28 06:41:14.327910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.576 [2024-11-28 06:41:14.327917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:03.576 [2024-11-28 06:41:14.327925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:16:03.576 [2024-11-28 06:41:14.327936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.329520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.576 [2024-11-28 06:41:14.329560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:03.576 [2024-11-28 06:41:14.329581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:16:03.576 [2024-11-28 06:41:14.329650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.329735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.576 [2024-11-28 06:41:14.329795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:03.576 [2024-11-28 06:41:14.329823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:03.576 [2024-11-28 06:41:14.329842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.334789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.576 [2024-11-28 06:41:14.334897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:16:03.576 [2024-11-28 06:41:14.334948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.576 [2024-11-28 06:41:14.334987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.335067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.576 [2024-11-28 06:41:14.335142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:03.576 [2024-11-28 06:41:14.335280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.576 [2024-11-28 06:41:14.335305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.335386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.576 [2024-11-28 06:41:14.335442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:03.576 [2024-11-28 06:41:14.335484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.576 [2024-11-28 06:41:14.335507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.576 [2024-11-28 06:41:14.335543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.576 [2024-11-28 06:41:14.335564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:03.576 [2024-11-28 06:41:14.335583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.576 [2024-11-28 06:41:14.335628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.344162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.344272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:03.835 [2024-11-28 06:41:14.344319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.344340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.347954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.348054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:03.835 [2024-11-28 06:41:14.348112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.348141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.348251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.348276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:03.835 [2024-11-28 06:41:14.348335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.348358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.348413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.348446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:03.835 [2024-11-28 06:41:14.348470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.348518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.348607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 
[2024-11-28 06:41:14.348638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:03.835 [2024-11-28 06:41:14.348694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.348734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.348788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.348842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:03.835 [2024-11-28 06:41:14.348871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.348891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.348975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.349026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:03.835 [2024-11-28 06:41:14.349077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.349100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.349209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:03.835 [2024-11-28 06:41:14.349278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:03.835 [2024-11-28 06:41:14.349319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:03.835 [2024-11-28 06:41:14.349341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.835 [2024-11-28 06:41:14.349492] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.376 ms, result 0 00:16:04.093 00:16:04.093 00:16:04.093 06:41:14 -- ftl/trim.sh@72 -- # svcpid=83050 00:16:04.093 06:41:14 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:04.093 06:41:14 -- ftl/trim.sh@73 -- # waitforlisten 83050 00:16:04.093 06:41:14 -- common/autotest_common.sh@829 -- # '[' -z 83050 ']' 00:16:04.093 06:41:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.093 06:41:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.093 06:41:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.093 06:41:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.093 06:41:14 -- common/autotest_common.sh@10 -- # set +x 00:16:04.093 [2024-11-28 06:41:14.848470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
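The xtrace above shows the usual SPDK test preamble: trim.sh@71 launches spdk_tgt with the ftl_init log flag, records its pid (83050 here), and waitforlisten polls the UNIX domain socket /var/tmp/spdk.sock up to max_retries=100 times before any RPC is issued. A rough equivalent of that pattern, assuming an SPDK checkout in the current directory and that the rpc_get_methods RPC is available (both are assumptions, not taken from this log):

# Start the target and wait until its RPC socket answers.
build/bin/spdk_tgt -L ftl_init &
svcpid=$!
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

Once the loop exits, saved configuration can be replayed with scripts/rpc.py load_config, which is exactly what trim.sh@75 does next in this run.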
00:16:04.093 [2024-11-28 06:41:14.848719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83050 ] 00:16:04.352 [2024-11-28 06:41:14.983409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.352 [2024-11-28 06:41:15.014239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:04.352 [2024-11-28 06:41:15.014422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.917 06:41:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.917 06:41:15 -- common/autotest_common.sh@862 -- # return 0 00:16:04.917 06:41:15 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:05.176 [2024-11-28 06:41:15.860341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:05.176 [2024-11-28 06:41:15.860398] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:05.436 [2024-11-28 06:41:16.021701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.021752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:05.436 [2024-11-28 06:41:16.021766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:05.436 [2024-11-28 06:41:16.021774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.023944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.024110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.436 [2024-11-28 06:41:16.024130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:16:05.436 [2024-11-28 06:41:16.024138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.024213] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:05.436 [2024-11-28 06:41:16.024448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:05.436 [2024-11-28 06:41:16.024465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.024473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.436 [2024-11-28 06:41:16.024485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:05.436 [2024-11-28 06:41:16.024492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.025597] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:05.436 [2024-11-28 06:41:16.027825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.027862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:05.436 [2024-11-28 06:41:16.027878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms 00:16:05.436 [2024-11-28 06:41:16.027888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.027942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.027956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:05.436 [2024-11-28 06:41:16.027966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:05.436 [2024-11-28 06:41:16.027975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.032718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.032747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.436 [2024-11-28 06:41:16.032756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:16:05.436 [2024-11-28 06:41:16.032765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.032862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.436 [2024-11-28 06:41:16.032876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.436 [2024-11-28 06:41:16.032885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:05.436 [2024-11-28 06:41:16.032898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.436 [2024-11-28 06:41:16.032921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.437 [2024-11-28 06:41:16.032930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:05.437 [2024-11-28 06:41:16.032940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:05.437 [2024-11-28 06:41:16.032949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.437 [2024-11-28 06:41:16.032972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:05.437 [2024-11-28 06:41:16.034273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.437 [2024-11-28 06:41:16.034299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.437 [2024-11-28 06:41:16.034310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:16:05.437 [2024-11-28 06:41:16.034318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.437 [2024-11-28 06:41:16.034364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.437 [2024-11-28 06:41:16.034376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:05.437 [2024-11-28 06:41:16.034389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:05.437 [2024-11-28 06:41:16.034397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.437 [2024-11-28 06:41:16.034422] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:05.437 [2024-11-28 06:41:16.034443] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:05.437 [2024-11-28 06:41:16.034484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:05.437 [2024-11-28 06:41:16.034499] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:05.437 [2024-11-28 06:41:16.034573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:05.437 [2024-11-28 06:41:16.034584] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:05.437 [2024-11-28 06:41:16.034595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:05.437 [2024-11-28 06:41:16.034608] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:05.437 [2024-11-28 06:41:16.034620] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:05.437 [2024-11-28 06:41:16.034629] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:05.437 [2024-11-28 06:41:16.034640] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:05.437 [2024-11-28 06:41:16.034647] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:05.437 [2024-11-28 06:41:16.034659] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:05.437 [2024-11-28 06:41:16.034667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.437 [2024-11-28 06:41:16.034676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:05.437 [2024-11-28 06:41:16.034684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:05.437 [2024-11-28 06:41:16.034693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.437 [2024-11-28 06:41:16.034780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.437 [2024-11-28 06:41:16.034791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:05.437 [2024-11-28 06:41:16.034803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:05.437 [2024-11-28 06:41:16.034814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.437 [2024-11-28 06:41:16.034904] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:05.437 [2024-11-28 06:41:16.034916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:05.437 [2024-11-28 06:41:16.034929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:05.437 [2024-11-28 06:41:16.034941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.437 [2024-11-28 06:41:16.034951] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:05.437 [2024-11-28 06:41:16.034960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:05.437 [2024-11-28 06:41:16.034969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:05.437 [2024-11-28 06:41:16.034978] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:05.437 [2024-11-28 06:41:16.034987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:05.437 [2024-11-28 06:41:16.034997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:05.437 [2024-11-28 06:41:16.035004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:05.437 [2024-11-28 06:41:16.035014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:05.437 [2024-11-28 06:41:16.035022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:05.437 [2024-11-28 06:41:16.035031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:05.437 [2024-11-28 06:41:16.035038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:05.437 [2024-11-28 06:41:16.035050] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:05.437 [2024-11-28 06:41:16.035067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:05.437 [2024-11-28 06:41:16.035075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:05.437 [2024-11-28 06:41:16.035094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:05.437 [2024-11-28 06:41:16.035103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:05.437 [2024-11-28 06:41:16.035121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:05.437 [2024-11-28 06:41:16.035146] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:05.437 [2024-11-28 06:41:16.035178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:05.437 [2024-11-28 06:41:16.035208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:05.437 [2024-11-28 06:41:16.035239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:05.437 [2024-11-28 06:41:16.035257] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:05.437 [2024-11-28 06:41:16.035265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:05.437 [2024-11-28 06:41:16.035273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:05.437 [2024-11-28 06:41:16.035281] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:05.437 [2024-11-28 06:41:16.035292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:05.437 [2024-11-28 06:41:16.035301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:05.437 [2024-11-28 06:41:16.035328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:05.437 [2024-11-28 06:41:16.035338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:05.437 [2024-11-28 06:41:16.035345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:16:05.437 [2024-11-28 06:41:16.035355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:05.437 [2024-11-28 06:41:16.035366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:05.437 [2024-11-28 06:41:16.035375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:05.437 [2024-11-28 06:41:16.035385] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:05.437 [2024-11-28 06:41:16.035401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:05.437 [2024-11-28 06:41:16.035410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:05.437 [2024-11-28 06:41:16.035418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:05.437 [2024-11-28 06:41:16.035426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:05.437 [2024-11-28 06:41:16.035434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:05.437 [2024-11-28 06:41:16.035441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:05.437 [2024-11-28 06:41:16.035449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:05.437 [2024-11-28 06:41:16.035457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:05.437 [2024-11-28 06:41:16.035466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:05.437 [2024-11-28 06:41:16.035473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:05.437 [2024-11-28 06:41:16.035483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:05.437 [2024-11-28 06:41:16.035490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:05.437 [2024-11-28 06:41:16.035499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:05.437 [2024-11-28 06:41:16.035507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:05.437 [2024-11-28 06:41:16.035515] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:05.438 [2024-11-28 06:41:16.035522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:05.438 [2024-11-28 06:41:16.035534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:05.438 [2024-11-28 06:41:16.035541] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:05.438 [2024-11-28 06:41:16.035550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:05.438 [2024-11-28 06:41:16.035557] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:05.438 [2024-11-28 06:41:16.035566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.035576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:05.438 [2024-11-28 06:41:16.035585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:16:05.438 [2024-11-28 06:41:16.035592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.041509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.041540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:05.438 [2024-11-28 06:41:16.041553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.867 ms 00:16:05.438 [2024-11-28 06:41:16.041560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.041663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.041674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:05.438 [2024-11-28 06:41:16.041687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:05.438 [2024-11-28 06:41:16.041695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.050738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.050767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:05.438 [2024-11-28 06:41:16.050778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.987 ms 00:16:05.438 [2024-11-28 06:41:16.050786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.050837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.050850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:05.438 [2024-11-28 06:41:16.050861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:05.438 [2024-11-28 06:41:16.050868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.051170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.051191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:05.438 [2024-11-28 06:41:16.051205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:05.438 [2024-11-28 06:41:16.051212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.051325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.051340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:05.438 [2024-11-28 06:41:16.051350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:05.438 [2024-11-28 06:41:16.051359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.056836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.056961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:05.438 [2024-11-28 06:41:16.056979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.450 ms 00:16:05.438 [2024-11-28 06:41:16.056990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.059624] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:05.438 [2024-11-28 06:41:16.059655] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:05.438 [2024-11-28 06:41:16.059667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.059676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:05.438 [2024-11-28 06:41:16.059686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:16:05.438 [2024-11-28 06:41:16.059693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.074323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.074352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:05.438 [2024-11-28 06:41:16.074365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.578 ms 00:16:05.438 [2024-11-28 06:41:16.074376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.076328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.076443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:05.438 [2024-11-28 06:41:16.076463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:16:05.438 [2024-11-28 06:41:16.076471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.078157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.078181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:05.438 [2024-11-28 06:41:16.078191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:16:05.438 [2024-11-28 06:41:16.078198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.078392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.078408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:05.438 [2024-11-28 06:41:16.078422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:16:05.438 [2024-11-28 06:41:16.078430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.096210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.096245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:05.438 [2024-11-28 06:41:16.096261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.746 ms 00:16:05.438 [2024-11-28 06:41:16.096271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.103659] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:05.438 [2024-11-28 06:41:16.117058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.117094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:05.438 [2024-11-28 06:41:16.117105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:16:05.438 [2024-11-28 06:41:16.117113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.117174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.117186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:05.438 [2024-11-28 06:41:16.117195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:05.438 [2024-11-28 06:41:16.117204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.117250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.117260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:05.438 [2024-11-28 06:41:16.117269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:05.438 [2024-11-28 06:41:16.117278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.118493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.118526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:05.438 [2024-11-28 06:41:16.118535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:16:05.438 [2024-11-28 06:41:16.118549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.118577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.118589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:05.438 [2024-11-28 06:41:16.118597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:05.438 [2024-11-28 06:41:16.118606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.118640] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:05.438 [2024-11-28 06:41:16.118655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.118662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:05.438 [2024-11-28 06:41:16.118673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:05.438 [2024-11-28 06:41:16.118680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.122180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.122213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:05.438 [2024-11-28 06:41:16.122225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.474 ms 00:16:05.438 [2024-11-28 06:41:16.122234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.122304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.438 [2024-11-28 06:41:16.122316] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:05.438 [2024-11-28 06:41:16.122326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:05.438 [2024-11-28 06:41:16.122335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.438 [2024-11-28 06:41:16.123200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:05.438 [2024-11-28 06:41:16.124218] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.235 ms, result 0 00:16:05.438 [2024-11-28 06:41:16.126221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:05.438 Some configs were skipped because the RPC state that can call them passed over. 00:16:05.438 06:41:16 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:05.696 [2024-11-28 06:41:16.333930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.696 [2024-11-28 06:41:16.333975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:05.697 [2024-11-28 06:41:16.333992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:16:05.697 [2024-11-28 06:41:16.334002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.697 [2024-11-28 06:41:16.334035] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.339 ms, result 0 00:16:05.697 true 00:16:05.697 06:41:16 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:05.957 [2024-11-28 06:41:16.517954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.517990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:05.957 [2024-11-28 06:41:16.518002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:16:05.957 [2024-11-28 06:41:16.518009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.518045] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 3.783 ms, result 0 00:16:05.957 true 00:16:05.957 06:41:16 -- ftl/trim.sh@81 -- # killprocess 83050 00:16:05.957 06:41:16 -- common/autotest_common.sh@936 -- # '[' -z 83050 ']' 00:16:05.957 06:41:16 -- common/autotest_common.sh@940 -- # kill -0 83050 00:16:05.957 06:41:16 -- common/autotest_common.sh@941 -- # uname 00:16:05.957 06:41:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.957 06:41:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83050 00:16:05.957 killing process with pid 83050 00:16:05.957 06:41:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.957 06:41:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.957 06:41:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83050' 00:16:05.957 06:41:16 -- common/autotest_common.sh@955 -- # kill 83050 00:16:05.957 06:41:16 -- common/autotest_common.sh@960 -- # wait 83050 00:16:05.957 [2024-11-28 06:41:16.648082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.648136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:05.957 
[2024-11-28 06:41:16.648149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:05.957 [2024-11-28 06:41:16.648160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.648183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:05.957 [2024-11-28 06:41:16.648613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.648637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:05.957 [2024-11-28 06:41:16.648649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:16:05.957 [2024-11-28 06:41:16.648656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.648959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.648977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:05.957 [2024-11-28 06:41:16.648988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:05.957 [2024-11-28 06:41:16.648998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.653523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.653555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:05.957 [2024-11-28 06:41:16.653566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:16:05.957 [2024-11-28 06:41:16.653574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.660648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.660678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:05.957 [2024-11-28 06:41:16.660697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.039 ms 00:16:05.957 [2024-11-28 06:41:16.660719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.662825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.662857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:05.957 [2024-11-28 06:41:16.662868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:16:05.957 [2024-11-28 06:41:16.662875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.667070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.667102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:05.957 [2024-11-28 06:41:16.667115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:16:05.957 [2024-11-28 06:41:16.667123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.667249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.667259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:05.957 [2024-11-28 06:41:16.667269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:05.957 [2024-11-28 06:41:16.667279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 
06:41:16.669780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.669809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:05.957 [2024-11-28 06:41:16.669822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:16:05.957 [2024-11-28 06:41:16.669829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.671857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.671990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:05.957 [2024-11-28 06:41:16.672008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:16:05.957 [2024-11-28 06:41:16.672016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.673887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.673918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:05.957 [2024-11-28 06:41:16.673928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:16:05.957 [2024-11-28 06:41:16.673935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.675544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.957 [2024-11-28 06:41:16.675651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:05.957 [2024-11-28 06:41:16.675667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:16:05.957 [2024-11-28 06:41:16.675675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.957 [2024-11-28 06:41:16.675717] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:05.958 [2024-11-28 06:41:16.675732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.675993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676053] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 
06:41:16.676260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:05.958 [2024-11-28 06:41:16.676320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:16:05.959 [2024-11-28 06:41:16.676478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:05.959 [2024-11-28 06:41:16.676605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:05.959 [2024-11-28 06:41:16.676614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:05.959 [2024-11-28 06:41:16.676621] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:05.959 [2024-11-28 06:41:16.676632] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:05.959 [2024-11-28 06:41:16.676639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:05.959 [2024-11-28 06:41:16.676651] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:05.959 [2024-11-28 06:41:16.676658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:05.959 [2024-11-28 06:41:16.676667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:05.959 [2024-11-28 06:41:16.676674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:05.959 [2024-11-28 06:41:16.676682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:05.959 [2024-11-28 06:41:16.676689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:05.959 [2024-11-28 06:41:16.676697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.959 [2024-11-28 06:41:16.676716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:05.959 [2024-11-28 06:41:16.676729] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:16:05.959 [2024-11-28 06:41:16.676737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.678116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.959 [2024-11-28 06:41:16.678137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:05.959 [2024-11-28 06:41:16.678147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:16:05.959 [2024-11-28 06:41:16.678155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.678213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.959 [2024-11-28 06:41:16.678222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:05.959 [2024-11-28 06:41:16.678233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:05.959 [2024-11-28 06:41:16.678243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.683413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.683518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:05.959 [2024-11-28 06:41:16.683570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.683593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.683687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.683736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:05.959 [2024-11-28 06:41:16.683802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.683827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.683883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.683979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:05.959 [2024-11-28 06:41:16.684001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.684049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.684087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.684110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:05.959 [2024-11-28 06:41:16.684162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.684184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.693314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.693442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:05.959 [2024-11-28 06:41:16.693495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.693508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.697248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.697350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:16:05.959 [2024-11-28 06:41:16.697409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.697432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.697487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.697509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.959 [2024-11-28 06:41:16.697532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.959 [2024-11-28 06:41:16.697552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.959 [2024-11-28 06:41:16.697607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.959 [2024-11-28 06:41:16.697631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.960 [2024-11-28 06:41:16.697654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.960 [2024-11-28 06:41:16.697723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-28 06:41:16.697816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.960 [2024-11-28 06:41:16.697841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.960 [2024-11-28 06:41:16.697863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.960 [2024-11-28 06:41:16.697881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-28 06:41:16.697931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.960 [2024-11-28 06:41:16.698026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:05.960 [2024-11-28 06:41:16.698052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.960 [2024-11-28 06:41:16.698072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-28 06:41:16.698121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.960 [2024-11-28 06:41:16.698143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.960 [2024-11-28 06:41:16.698260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.960 [2024-11-28 06:41:16.698291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-28 06:41:16.698358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.960 [2024-11-28 06:41:16.698383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.960 [2024-11-28 06:41:16.698408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.960 [2024-11-28 06:41:16.698427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.960 [2024-11-28 06:41:16.698566] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.462 ms, result 0 00:16:06.218 06:41:16 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:06.218 06:41:16 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:06.218 [2024-11-28 06:41:16.924530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 
initialization... 00:16:06.218 [2024-11-28 06:41:16.924654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83091 ] 00:16:06.476 [2024-11-28 06:41:17.060225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.476 [2024-11-28 06:41:17.091632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.476 [2024-11-28 06:41:17.176899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.476 [2024-11-28 06:41:17.176972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.736 [2024-11-28 06:41:17.325783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.325822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:06.736 [2024-11-28 06:41:17.325837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:06.736 [2024-11-28 06:41:17.325845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.328036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.328071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.736 [2024-11-28 06:41:17.328081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:16:06.736 [2024-11-28 06:41:17.328089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.328160] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:06.736 [2024-11-28 06:41:17.328390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:06.736 [2024-11-28 06:41:17.328405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.328429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.736 [2024-11-28 06:41:17.328438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:06.736 [2024-11-28 06:41:17.328446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.329576] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:06.736 [2024-11-28 06:41:17.331874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.331908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:06.736 [2024-11-28 06:41:17.331917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:16:06.736 [2024-11-28 06:41:17.331924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.331986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.331996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:06.736 [2024-11-28 06:41:17.332009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:06.736 [2024-11-28 06:41:17.332019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.336883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:06.736 [2024-11-28 06:41:17.336913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.736 [2024-11-28 06:41:17.336922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:16:06.736 [2024-11-28 06:41:17.336930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.337019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.337030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.736 [2024-11-28 06:41:17.337040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:06.736 [2024-11-28 06:41:17.337050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.337074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.337083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:06.736 [2024-11-28 06:41:17.337091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:06.736 [2024-11-28 06:41:17.337100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.337121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:06.736 [2024-11-28 06:41:17.338429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.338457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.736 [2024-11-28 06:41:17.338469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:16:06.736 [2024-11-28 06:41:17.338477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.338513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.338524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:06.736 [2024-11-28 06:41:17.338532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:06.736 [2024-11-28 06:41:17.338540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.338558] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:06.736 [2024-11-28 06:41:17.338574] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:06.736 [2024-11-28 06:41:17.338618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:06.736 [2024-11-28 06:41:17.338634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:06.736 [2024-11-28 06:41:17.338731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:06.736 [2024-11-28 06:41:17.338745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:06.736 [2024-11-28 06:41:17.338759] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:06.736 [2024-11-28 06:41:17.338772] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:06.736 [2024-11-28 06:41:17.338780] ftl_layout.c: 
678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:06.736 [2024-11-28 06:41:17.338791] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:06.736 [2024-11-28 06:41:17.338798] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:06.736 [2024-11-28 06:41:17.338813] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:06.736 [2024-11-28 06:41:17.338820] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:06.736 [2024-11-28 06:41:17.338827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.338858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:06.736 [2024-11-28 06:41:17.338866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:16:06.736 [2024-11-28 06:41:17.338876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.338940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.736 [2024-11-28 06:41:17.338949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:06.736 [2024-11-28 06:41:17.338960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:06.736 [2024-11-28 06:41:17.338969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.736 [2024-11-28 06:41:17.339043] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:06.736 [2024-11-28 06:41:17.339057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:06.736 [2024-11-28 06:41:17.339065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.736 [2024-11-28 06:41:17.339073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.736 [2024-11-28 06:41:17.339085] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:06.736 [2024-11-28 06:41:17.339092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:06.736 [2024-11-28 06:41:17.339101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:06.736 [2024-11-28 06:41:17.339109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:06.736 [2024-11-28 06:41:17.339117] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:06.736 [2024-11-28 06:41:17.339125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.736 [2024-11-28 06:41:17.339132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:06.737 [2024-11-28 06:41:17.339141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:06.737 [2024-11-28 06:41:17.339148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.737 [2024-11-28 06:41:17.339156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:06.737 [2024-11-28 06:41:17.339163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:06.737 [2024-11-28 06:41:17.339171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339180] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:06.737 [2024-11-28 06:41:17.339187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:06.737 [2024-11-28 06:41:17.339194] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:06.737 [2024-11-28 06:41:17.339209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:06.737 [2024-11-28 06:41:17.339217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:06.737 [2024-11-28 06:41:17.339233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:06.737 [2024-11-28 06:41:17.339258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:06.737 [2024-11-28 06:41:17.339279] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:06.737 [2024-11-28 06:41:17.339306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:06.737 [2024-11-28 06:41:17.339327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.737 [2024-11-28 06:41:17.339342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:06.737 [2024-11-28 06:41:17.339349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:06.737 [2024-11-28 06:41:17.339356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.737 [2024-11-28 06:41:17.339363] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:06.737 [2024-11-28 06:41:17.339371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:06.737 [2024-11-28 06:41:17.339379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.737 [2024-11-28 06:41:17.339387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.737 [2024-11-28 06:41:17.339394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:06.737 [2024-11-28 06:41:17.339402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:06.737 [2024-11-28 06:41:17.339409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:06.737 [2024-11-28 06:41:17.339419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:06.737 [2024-11-28 06:41:17.339426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:06.737 [2024-11-28 06:41:17.339433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:06.737 
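(Editor's note) The region dump above is internally consistent with the geometry printed a few entries earlier: 23592960 L2P entries at an address size of 4 bytes come to exactly the 90.00 MiB shown for the l2p region, and the 102400.00 MiB data_btm region is the user-data portion of the 103424.00 MiB base device. A quick arithmetic cross-check (editor's sketch; the numbers are copied from the log, nothing here queries the device):

    # l2p region: entries * address size, converted to MiB -> prints 90
    echo $(( 23592960 * 4 / 1024 / 1024 ))
    # the second unmap earlier in this log (--lba 23591936, 1024 blocks)
    # ends exactly at the L2P entry count -> prints 23592960
    echo $(( 23591936 + 1024 ))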
[2024-11-28 06:41:17.339442] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:06.737 [2024-11-28 06:41:17.339455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.737 [2024-11-28 06:41:17.339468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:06.737 [2024-11-28 06:41:17.339476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:06.737 [2024-11-28 06:41:17.339484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:06.737 [2024-11-28 06:41:17.339494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:06.737 [2024-11-28 06:41:17.339502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:06.737 [2024-11-28 06:41:17.339511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:06.737 [2024-11-28 06:41:17.339519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:06.737 [2024-11-28 06:41:17.339527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:06.737 [2024-11-28 06:41:17.339535] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:06.737 [2024-11-28 06:41:17.339543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:06.737 [2024-11-28 06:41:17.339552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:06.737 [2024-11-28 06:41:17.339562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:06.737 [2024-11-28 06:41:17.339571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:06.737 [2024-11-28 06:41:17.339578] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:06.737 [2024-11-28 06:41:17.339587] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.737 [2024-11-28 06:41:17.339595] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:06.737 [2024-11-28 06:41:17.339602] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:06.737 [2024-11-28 06:41:17.339609] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:06.737 [2024-11-28 06:41:17.339617] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:06.737 [2024-11-28 06:41:17.339624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.339636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:06.737 [2024-11-28 06:41:17.339643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:16:06.737 [2024-11-28 06:41:17.339650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.345675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.345731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:06.737 [2024-11-28 06:41:17.345742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.984 ms 00:16:06.737 [2024-11-28 06:41:17.345752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.345863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.345874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:06.737 [2024-11-28 06:41:17.345883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:06.737 [2024-11-28 06:41:17.345891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.362391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.362430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:06.737 [2024-11-28 06:41:17.362442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.479 ms 00:16:06.737 [2024-11-28 06:41:17.362455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.362523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.362534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:06.737 [2024-11-28 06:41:17.362543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:06.737 [2024-11-28 06:41:17.362555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.362902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.362918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:06.737 [2024-11-28 06:41:17.362927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:16:06.737 [2024-11-28 06:41:17.362934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.363054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.363064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:06.737 [2024-11-28 06:41:17.363072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:06.737 [2024-11-28 06:41:17.363080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.368668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.368837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:06.737 [2024-11-28 06:41:17.368855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.568 ms 00:16:06.737 [2024-11-28 06:41:17.368865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.371848] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:06.737 [2024-11-28 06:41:17.371992] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:06.737 [2024-11-28 06:41:17.372011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.372021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:06.737 [2024-11-28 06:41:17.372031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.051 ms 00:16:06.737 [2024-11-28 06:41:17.372040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.737 [2024-11-28 06:41:17.387830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.737 [2024-11-28 06:41:17.387872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:06.737 [2024-11-28 06:41:17.387885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.510 ms 00:16:06.738 [2024-11-28 06:41:17.387893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.389980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.390014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:06.738 [2024-11-28 06:41:17.390023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:16:06.738 [2024-11-28 06:41:17.390030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.392110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.392152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:06.738 [2024-11-28 06:41:17.392162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:16:06.738 [2024-11-28 06:41:17.392170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.392396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.392428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:06.738 [2024-11-28 06:41:17.392438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:16:06.738 [2024-11-28 06:41:17.392446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.410472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.410628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:06.738 [2024-11-28 06:41:17.410647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.005 ms 00:16:06.738 [2024-11-28 06:41:17.410655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.418026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:06.738 [2024-11-28 06:41:17.432038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.432073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:06.738 [2024-11-28 
06:41:17.432085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.290 ms 00:16:06.738 [2024-11-28 06:41:17.432093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.432156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.432169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:06.738 [2024-11-28 06:41:17.432178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:06.738 [2024-11-28 06:41:17.432185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.432230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.432238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:06.738 [2024-11-28 06:41:17.432246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:06.738 [2024-11-28 06:41:17.432254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.433503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.433537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:06.738 [2024-11-28 06:41:17.433549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:16:06.738 [2024-11-28 06:41:17.433556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.433591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.433600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:06.738 [2024-11-28 06:41:17.433612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:06.738 [2024-11-28 06:41:17.433622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.433654] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:06.738 [2024-11-28 06:41:17.433666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.433673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:06.738 [2024-11-28 06:41:17.433681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:06.738 [2024-11-28 06:41:17.433693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.437683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.437734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:06.738 [2024-11-28 06:41:17.437744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.950 ms 00:16:06.738 [2024-11-28 06:41:17.437758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.437829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.738 [2024-11-28 06:41:17.437840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:06.738 [2024-11-28 06:41:17.437848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:06.738 [2024-11-28 06:41:17.437855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.738 [2024-11-28 06:41:17.438599] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:06.738 [2024-11-28 06:41:17.439593] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.546 ms, result 0 00:16:06.738 [2024-11-28 06:41:17.440461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:06.738 [2024-11-28 06:41:17.449155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:08.113  [2024-11-28T06:41:19.451Z] Copying: 27/256 [MB] (27 MBps) [2024-11-28T06:41:20.824Z] Copying: 48/256 [MB] (21 MBps) [2024-11-28T06:41:21.758Z] Copying: 66/256 [MB] (18 MBps) [2024-11-28T06:41:22.692Z] Copying: 91/256 [MB] (24 MBps) [2024-11-28T06:41:23.627Z] Copying: 105/256 [MB] (14 MBps) [2024-11-28T06:41:24.563Z] Copying: 124/256 [MB] (18 MBps) [2024-11-28T06:41:25.497Z] Copying: 142/256 [MB] (18 MBps) [2024-11-28T06:41:26.872Z] Copying: 165/256 [MB] (22 MBps) [2024-11-28T06:41:27.807Z] Copying: 181/256 [MB] (15 MBps) [2024-11-28T06:41:28.739Z] Copying: 196/256 [MB] (15 MBps) [2024-11-28T06:41:29.675Z] Copying: 215/256 [MB] (19 MBps) [2024-11-28T06:41:30.610Z] Copying: 236/256 [MB] (20 MBps) [2024-11-28T06:41:30.610Z] Copying: 254/256 [MB] (18 MBps) [2024-11-28T06:41:30.610Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-28 06:41:30.518408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:19.840 [2024-11-28 06:41:30.519504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.519536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:19.840 [2024-11-28 06:41:30.519554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:19.840 [2024-11-28 06:41:30.519563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.519583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:19.840 [2024-11-28 06:41:30.520000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.520015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:19.840 [2024-11-28 06:41:30.520030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:16:19.840 [2024-11-28 06:41:30.520038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.520294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.520308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:19.840 [2024-11-28 06:41:30.520320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:19.840 [2024-11-28 06:41:30.520328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.524036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.524056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:19.840 [2024-11-28 06:41:30.524065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:16:19.840 [2024-11-28 06:41:30.524076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.531307] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.531434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:19.840 [2024-11-28 06:41:30.531449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.205 ms 00:16:19.840 [2024-11-28 06:41:30.531457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.533859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.533889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:19.840 [2024-11-28 06:41:30.533898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:16:19.840 [2024-11-28 06:41:30.533904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.840 [2024-11-28 06:41:30.537912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.840 [2024-11-28 06:41:30.538026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:19.840 [2024-11-28 06:41:30.538041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:16:19.841 [2024-11-28 06:41:30.538048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.841 [2024-11-28 06:41:30.538164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.841 [2024-11-28 06:41:30.538174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:19.841 [2024-11-28 06:41:30.538182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:19.841 [2024-11-28 06:41:30.538189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.841 [2024-11-28 06:41:30.540690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.841 [2024-11-28 06:41:30.540732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:19.841 [2024-11-28 06:41:30.540741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:16:19.841 [2024-11-28 06:41:30.540747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.841 [2024-11-28 06:41:30.543077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.841 [2024-11-28 06:41:30.543106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:19.841 [2024-11-28 06:41:30.543114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:16:19.841 [2024-11-28 06:41:30.543121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.841 [2024-11-28 06:41:30.544936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.841 [2024-11-28 06:41:30.545039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:19.841 [2024-11-28 06:41:30.545052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:16:19.841 [2024-11-28 06:41:30.545058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.841 [2024-11-28 06:41:30.546733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.841 [2024-11-28 06:41:30.546761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:19.841 [2024-11-28 06:41:30.546769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:16:19.841 [2024-11-28 06:41:30.546775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:19.841 [2024-11-28 06:41:30.546803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:19.841 [2024-11-28 06:41:30.546821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free 00:16:19.842 [2024-11-28 06:41:30.547564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:19.842 [2024-11-28 06:41:30.547572] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:19.842 [2024-11-28 06:41:30.547580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:19.842 [2024-11-28 06:41:30.547586] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:19.842 [2024-11-28 06:41:30.547593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:19.842 [2024-11-28 06:41:30.547601] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:19.842 [2024-11-28 06:41:30.547608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:19.842 [2024-11-28 06:41:30.547615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:19.842 [2024-11-28 06:41:30.547622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:19.842 [2024-11-28 06:41:30.547628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:19.842 [2024-11-28 06:41:30.547634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:19.842 [2024-11-28 06:41:30.547641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.842 [2024-11-28 06:41:30.547648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:19.842 [2024-11-28 06:41:30.547658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:16:19.842 [2024-11-28 06:41:30.547665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.842 [2024-11-28 06:41:30.549253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.842 [2024-11-28 06:41:30.549295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:19.842 [2024-11-28 06:41:30.549315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:16:19.842 [2024-11-28 06:41:30.549334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.842 [2024-11-28 06:41:30.549398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.842 [2024-11-28 06:41:30.549455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:19.842 [2024-11-28 06:41:30.549475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:19.842 [2024-11-28 06:41:30.549498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.842 [2024-11-28 06:41:30.554378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.842 [2024-11-28 06:41:30.554482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.842 [2024-11-28 06:41:30.554528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.554549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.554636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.554663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.843 [2024-11-28 06:41:30.554682]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.554701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.554805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.554831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.843 [2024-11-28 06:41:30.554851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.554869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.554922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.554945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.843 [2024-11-28 06:41:30.554968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.554986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.563096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.563224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.843 [2024-11-28 06:41:30.563273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.563297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.566919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.843 [2024-11-28 06:41:30.567083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.843 [2024-11-28 06:41:30.567198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.843 [2024-11-28 06:41:30.567326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.843 [2024-11-28 06:41:30.567523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:16:19.843 [2024-11-28 06:41:30.567659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.843 [2024-11-28 06:41:30.567833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.567852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.567913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.843 [2024-11-28 06:41:30.567967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.843 [2024-11-28 06:41:30.567989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.843 [2024-11-28 06:41:30.568011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.843 [2024-11-28 06:41:30.568155] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.638 ms, result 0 00:16:20.102 00:16:20.102 00:16:20.102 06:41:30 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:20.102 06:41:30 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:20.668 06:41:31 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.668 [2024-11-28 06:41:31.348912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
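
The three trim.sh steps above are the heart of the check: cmp confirms that the first 4 MiB of the dumped file (4194304 bytes) compares equal to /dev/zero, i.e. that the range reads back as zeroes after the trim; md5sum records a checksum of the dump for a later comparison; spdk_dd then writes 1024 blocks of a random pattern through the ftl0 bdev, which is why a fresh 'FTL startup' management process follows immediately below. A minimal sketch of that sequence, assuming the same workspace paths as in the log (the DATA and SPDK variables are ours, added for brevity, not part of the test script):

  #!/usr/bin/env bash
  # Verify-then-rewrite sequence, reconstructed from the trim.sh
  # commands logged above (an illustration, not the real script).
  set -e
  DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data
  SPDK=/home/vagrant/spdk_repo/spdk

  # 1. The first 4 MiB of the dump must be all zeroes (the trimmed range).
  cmp --bytes=4194304 "$DATA" /dev/zero

  # 2. Checksum the dump so a later readback can be compared against it.
  md5sum "$DATA"

  # 3. Write 1024 blocks of a random pattern through the FTL bdev;
  #    spdk_dd starts its own SPDK app instance, hence the second
  #    'FTL startup' management process that follows in the log.
  "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/random_pattern \
      --ob=ftl0 --count=1024 --json="$SPDK"/test/ftl/config/ftl.json
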
00:16:20.668 [2024-11-28 06:41:31.349153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83246 ] 00:16:20.925 [2024-11-28 06:41:31.485492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.925 [2024-11-28 06:41:31.515826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.925 [2024-11-28 06:41:31.599166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:20.925 [2024-11-28 06:41:31.599238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:21.189 [2024-11-28 06:41:31.748684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.189 [2024-11-28 06:41:31.748861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.189 [2024-11-28 06:41:31.748884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:21.189 [2024-11-28 06:41:31.748892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.189 [2024-11-28 06:41:31.751066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.189 [2024-11-28 06:41:31.751098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.189 [2024-11-28 06:41:31.751108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:16:21.189 [2024-11-28 06:41:31.751115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.189 [2024-11-28 06:41:31.751188] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.189 [2024-11-28 06:41:31.751409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.189 [2024-11-28 06:41:31.751423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.189 [2024-11-28 06:41:31.751431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.189 [2024-11-28 06:41:31.751439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:21.189 [2024-11-28 06:41:31.751450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.189 [2024-11-28 06:41:31.752535] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:21.189 [2024-11-28 06:41:31.755270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.755301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:21.190 [2024-11-28 06:41:31.755311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:16:21.190 [2024-11-28 06:41:31.755318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.755375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.755385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:21.190 [2024-11-28 06:41:31.755394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:21.190 [2024-11-28 06:41:31.755403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.760203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 
06:41:31.760305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.190 [2024-11-28 06:41:31.760356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:16:21.190 [2024-11-28 06:41:31.760378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.760492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.760520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.190 [2024-11-28 06:41:31.760543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:21.190 [2024-11-28 06:41:31.760561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.760599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.760620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.190 [2024-11-28 06:41:31.760693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.190 [2024-11-28 06:41:31.760742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.760772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:21.190 [2024-11-28 06:41:31.762049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.762070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.190 [2024-11-28 06:41:31.762082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:16:21.190 [2024-11-28 06:41:31.762093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.762128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.762139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.190 [2024-11-28 06:41:31.762147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:21.190 [2024-11-28 06:41:31.762155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.762173] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:21.190 [2024-11-28 06:41:31.762189] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:21.190 [2024-11-28 06:41:31.762223] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:21.190 [2024-11-28 06:41:31.762238] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:21.190 [2024-11-28 06:41:31.762310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:21.190 [2024-11-28 06:41:31.762326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.190 [2024-11-28 06:41:31.762336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:21.190 [2024-11-28 06:41:31.762349] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762357] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762369] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:21.190 [2024-11-28 06:41:31.762375] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.190 [2024-11-28 06:41:31.762384] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:21.190 [2024-11-28 06:41:31.762391] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:21.190 [2024-11-28 06:41:31.762399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.762406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.190 [2024-11-28 06:41:31.762413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:16:21.190 [2024-11-28 06:41:31.762420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.762483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.190 [2024-11-28 06:41:31.762491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.190 [2024-11-28 06:41:31.762500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:21.190 [2024-11-28 06:41:31.762509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.190 [2024-11-28 06:41:31.762588] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.190 [2024-11-28 06:41:31.762603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.190 [2024-11-28 06:41:31.762610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.190 [2024-11-28 06:41:31.762632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.190 [2024-11-28 06:41:31.762652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.190 [2024-11-28 06:41:31.762666] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.190 [2024-11-28 06:41:31.762672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:21.190 [2024-11-28 06:41:31.762678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.190 [2024-11-28 06:41:31.762684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.190 [2024-11-28 06:41:31.762691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:21.190 [2024-11-28 06:41:31.762700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.190 [2024-11-28 06:41:31.762729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:21.190 [2024-11-28 06:41:31.762737] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:21.190 [2024-11-28 06:41:31.762751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:21.190 [2024-11-28 06:41:31.762759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.190 [2024-11-28 06:41:31.762774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762789] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.190 [2024-11-28 06:41:31.762796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.190 [2024-11-28 06:41:31.762818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.190 [2024-11-28 06:41:31.762844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.190 [2024-11-28 06:41:31.762866] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.190 [2024-11-28 06:41:31.762880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.190 [2024-11-28 06:41:31.762888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:21.190 [2024-11-28 06:41:31.762895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.190 [2024-11-28 06:41:31.762902] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.190 [2024-11-28 06:41:31.762910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.190 [2024-11-28 06:41:31.762917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.190 [2024-11-28 06:41:31.762934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:21.190 [2024-11-28 06:41:31.762942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.190 [2024-11-28 06:41:31.762949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.190 [2024-11-28 06:41:31.762959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.190 [2024-11-28 06:41:31.762966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.190 [2024-11-28 06:41:31.762975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.190 [2024-11-28 06:41:31.762984] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.190 [2024-11-28 06:41:31.762997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.190 [2024-11-28 06:41:31.763008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:21.190 [2024-11-28 06:41:31.763017] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:21.191 [2024-11-28 06:41:31.763025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:21.191 [2024-11-28 06:41:31.763034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:21.191 [2024-11-28 06:41:31.763042] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:21.191 [2024-11-28 06:41:31.763050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:21.191 [2024-11-28 06:41:31.763057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:21.191 [2024-11-28 06:41:31.763065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:21.191 [2024-11-28 06:41:31.763073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:21.191 [2024-11-28 06:41:31.763082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:21.191 [2024-11-28 06:41:31.763089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:21.191 [2024-11-28 06:41:31.763099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:21.191 [2024-11-28 06:41:31.763108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:21.191 [2024-11-28 06:41:31.763115] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.191 [2024-11-28 06:41:31.763128] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.191 [2024-11-28 06:41:31.763138] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.191 [2024-11-28 06:41:31.763146] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.191 [2024-11-28 06:41:31.763154] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.191 [2024-11-28 06:41:31.763162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.191 [2024-11-28 06:41:31.763171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.763184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.191 [2024-11-28 06:41:31.763192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:16:21.191 [2024-11-28 06:41:31.763199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.769202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.769308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.191 [2024-11-28 06:41:31.769364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.961 ms 00:16:21.191 [2024-11-28 06:41:31.769387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.769506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.769533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:21.191 [2024-11-28 06:41:31.769554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:21.191 [2024-11-28 06:41:31.769611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.786265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.786391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.191 [2024-11-28 06:41:31.786451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.616 ms 00:16:21.191 [2024-11-28 06:41:31.786476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.786556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.786583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.191 [2024-11-28 06:41:31.786604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.191 [2024-11-28 06:41:31.786628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.786958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.787014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.191 [2024-11-28 06:41:31.787035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:16:21.191 [2024-11-28 06:41:31.787058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.787188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.787211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.191 [2024-11-28 06:41:31.787272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:21.191 [2024-11-28 06:41:31.787331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.792592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.792714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.191 [2024-11-28 06:41:31.792768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.222 ms 00:16:21.191 
[2024-11-28 06:41:31.792792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.795404] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:21.191 [2024-11-28 06:41:31.795516] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:21.191 [2024-11-28 06:41:31.795576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.795598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:21.191 [2024-11-28 06:41:31.795618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:16:21.191 [2024-11-28 06:41:31.795636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.810340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.810449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:21.191 [2024-11-28 06:41:31.810496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.654 ms 00:16:21.191 [2024-11-28 06:41:31.810522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.814838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.815117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:21.191 [2024-11-28 06:41:31.815160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:16:21.191 [2024-11-28 06:41:31.815181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.818457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.818524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:21.191 [2024-11-28 06:41:31.818547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:16:21.191 [2024-11-28 06:41:31.818565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.819130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.819184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:21.191 [2024-11-28 06:41:31.819211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:16:21.191 [2024-11-28 06:41:31.819233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.839304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.839339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:21.191 [2024-11-28 06:41:31.839350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.994 ms 00:16:21.191 [2024-11-28 06:41:31.839361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.846681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:21.191 [2024-11-28 06:41:31.860469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.860501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:21.191 [2024-11-28 06:41:31.860513] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.038 ms 00:16:21.191 [2024-11-28 06:41:31.860526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.860589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.860599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:21.191 [2024-11-28 06:41:31.860608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:21.191 [2024-11-28 06:41:31.860615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.860659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.860669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:21.191 [2024-11-28 06:41:31.860676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:21.191 [2024-11-28 06:41:31.860684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.861851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.861978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:21.191 [2024-11-28 06:41:31.861992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:16:21.191 [2024-11-28 06:41:31.862000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.862037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.862050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:21.191 [2024-11-28 06:41:31.862060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:21.191 [2024-11-28 06:41:31.862067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.862096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:21.191 [2024-11-28 06:41:31.862106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.862114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:21.191 [2024-11-28 06:41:31.862123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:21.191 [2024-11-28 06:41:31.862130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.191 [2024-11-28 06:41:31.866145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.191 [2024-11-28 06:41:31.866175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:21.191 [2024-11-28 06:41:31.866184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:16:21.191 [2024-11-28 06:41:31.866192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.192 [2024-11-28 06:41:31.866260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.192 [2024-11-28 06:41:31.866269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:21.192 [2024-11-28 06:41:31.866277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:21.192 [2024-11-28 06:41:31.866293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.192 [2024-11-28 06:41:31.867029] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.192 [2024-11-28 06:41:31.867980] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.097 ms, result 0 00:16:21.192 [2024-11-28 06:41:31.869368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.192 [2024-11-28 06:41:31.877496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.507  [2024-11-28T06:41:32.277Z] Copying: 4096/4096 [kB] (average 14 MBps)[2024-11-28 06:41:32.151032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:21.507 [2024-11-28 06:41:32.151687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.507 [2024-11-28 06:41:32.151732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:21.507 [2024-11-28 06:41:32.151748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.508 [2024-11-28 06:41:32.151756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.151776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:21.508 [2024-11-28 06:41:32.152174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.152232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:21.508 [2024-11-28 06:41:32.152244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:16:21.508 [2024-11-28 06:41:32.152251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.155284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.155409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:21.508 [2024-11-28 06:41:32.155427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:16:21.508 [2024-11-28 06:41:32.155440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.159621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.159647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:21.508 [2024-11-28 06:41:32.159656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:16:21.508 [2024-11-28 06:41:32.159663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.167186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.167215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:21.508 [2024-11-28 06:41:32.167225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.498 ms 00:16:21.508 [2024-11-28 06:41:32.167233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.168695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.168736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:21.508 [2024-11-28 06:41:32.168744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:16:21.508 [2024-11-28 
06:41:32.168752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.172284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.172315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:21.508 [2024-11-28 06:41:32.172330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.502 ms 00:16:21.508 [2024-11-28 06:41:32.172337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.172470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.172480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:21.508 [2024-11-28 06:41:32.172489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:21.508 [2024-11-28 06:41:32.172496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.175489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.175531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:21.508 [2024-11-28 06:41:32.175541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:16:21.508 [2024-11-28 06:41:32.175548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.177203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.177232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:21.508 [2024-11-28 06:41:32.177241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:16:21.508 [2024-11-28 06:41:32.177248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.178421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.178551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:21.508 [2024-11-28 06:41:32.178565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:16:21.508 [2024-11-28 06:41:32.178573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.180264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.508 [2024-11-28 06:41:32.180292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:21.508 [2024-11-28 06:41:32.180299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:16:21.508 [2024-11-28 06:41:32.180306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.508 [2024-11-28 06:41:32.180595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:21.508 [2024-11-28 06:41:32.180636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180678] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 
06:41:32.180882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.180998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.181006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.181013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.181020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.181027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:21.508 [2024-11-28 06:41:32.181034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:16:21.509 [2024-11-28 06:41:32.181073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:21.509 [2024-11-28 06:41:32.181421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:21.509 [2024-11-28 06:41:32.181429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:21.509 [2024-11-28 06:41:32.181437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:21.509 [2024-11-28 06:41:32.181444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:21.509 [2024-11-28 
06:41:32.181451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:21.509 [2024-11-28 06:41:32.181458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:21.509 [2024-11-28 06:41:32.181465] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:21.509 [2024-11-28 06:41:32.181473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:21.509 [2024-11-28 06:41:32.181480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:21.509 [2024-11-28 06:41:32.181487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:21.509 [2024-11-28 06:41:32.181493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:21.509 [2024-11-28 06:41:32.181501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.509 [2024-11-28 06:41:32.181511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:21.509 [2024-11-28 06:41:32.181519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:16:21.509 [2024-11-28 06:41:32.181527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.182847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.509 [2024-11-28 06:41:32.182868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:21.509 [2024-11-28 06:41:32.182876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:16:21.509 [2024-11-28 06:41:32.182884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.182939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.509 [2024-11-28 06:41:32.182947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:21.509 [2024-11-28 06:41:32.182955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:21.509 [2024-11-28 06:41:32.182961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.187897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.187926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.509 [2024-11-28 06:41:32.187940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.187947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.188029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.188040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.509 [2024-11-28 06:41:32.188048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.188055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.188090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.188103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.509 [2024-11-28 06:41:32.188111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.188118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.188134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.188144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.509 [2024-11-28 06:41:32.188152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.188159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.196119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.196152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.509 [2024-11-28 06:41:32.196161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.196169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.199738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.199765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.509 [2024-11-28 06:41:32.199780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.509 [2024-11-28 06:41:32.199787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.509 [2024-11-28 06:41:32.199826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.509 [2024-11-28 06:41:32.199837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.510 [2024-11-28 06:41:32.199844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.199852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 [2024-11-28 06:41:32.199881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.510 [2024-11-28 06:41:32.199890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.510 [2024-11-28 06:41:32.199901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.199908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 [2024-11-28 06:41:32.199971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.510 [2024-11-28 06:41:32.199980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.510 [2024-11-28 06:41:32.199988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.199996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 [2024-11-28 06:41:32.200025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.510 [2024-11-28 06:41:32.200034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:21.510 [2024-11-28 06:41:32.200042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.200052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 [2024-11-28 06:41:32.200091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.510 [2024-11-28 06:41:32.200101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.510 [2024-11-28 06:41:32.200108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.200119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 
[2024-11-28 06:41:32.200168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.510 [2024-11-28 06:41:32.200178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.510 [2024-11-28 06:41:32.200188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.510 [2024-11-28 06:41:32.200196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.510 [2024-11-28 06:41:32.200335] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.623 ms, result 0 00:16:21.769 00:16:21.769 00:16:21.769 06:41:32 -- ftl/trim.sh@93 -- # svcpid=83260 00:16:21.769 06:41:32 -- ftl/trim.sh@94 -- # waitforlisten 83260 00:16:21.769 06:41:32 -- common/autotest_common.sh@829 -- # '[' -z 83260 ']' 00:16:21.769 06:41:32 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:21.769 06:41:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.769 06:41:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.769 06:41:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.769 06:41:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.769 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:16:21.769 [2024-11-28 06:41:32.461317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:21.769 [2024-11-28 06:41:32.461431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83260 ] 00:16:22.028 [2024-11-28 06:41:32.597001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.028 [2024-11-28 06:41:32.627500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:22.028 [2024-11-28 06:41:32.627679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.594 06:41:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.594 06:41:33 -- common/autotest_common.sh@862 -- # return 0 00:16:22.594 06:41:33 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:22.852 [2024-11-28 06:41:33.460827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:22.852 [2024-11-28 06:41:33.460880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:22.852 [2024-11-28 06:41:33.603149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.852 [2024-11-28 06:41:33.603551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:22.852 [2024-11-28 06:41:33.603613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:22.852 [2024-11-28 06:41:33.603648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.852 [2024-11-28 06:41:33.609207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.852 [2024-11-28 06:41:33.609243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:22.852 [2024-11-28 06:41:33.609255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.450 ms 00:16:22.852 [2024-11-28 06:41:33.609262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.852 [2024-11-28 06:41:33.609360] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:22.852 [2024-11-28 06:41:33.609586] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:22.852 [2024-11-28 06:41:33.609603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.852 [2024-11-28 06:41:33.609614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:22.852 [2024-11-28 06:41:33.609624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:22.852 [2024-11-28 06:41:33.609631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.852 [2024-11-28 06:41:33.610773] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:22.853 [2024-11-28 06:41:33.612951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.853 [2024-11-28 06:41:33.613093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:22.853 [2024-11-28 06:41:33.613109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:16:22.853 [2024-11-28 06:41:33.613119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.853 [2024-11-28 06:41:33.613180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.853 [2024-11-28 06:41:33.613195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:22.853 [2024-11-28 06:41:33.613205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:22.853 [2024-11-28 06:41:33.613214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.853 [2024-11-28 06:41:33.617989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.853 [2024-11-28 06:41:33.618022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.853 [2024-11-28 06:41:33.618034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.721 ms 00:16:22.853 [2024-11-28 06:41:33.618046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.853 [2024-11-28 06:41:33.618138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.853 [2024-11-28 06:41:33.618152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.853 [2024-11-28 06:41:33.618161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:22.853 [2024-11-28 06:41:33.618171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.853 [2024-11-28 06:41:33.618194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.853 [2024-11-28 06:41:33.618205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:22.853 [2024-11-28 06:41:33.618212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:22.853 [2024-11-28 06:41:33.618221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.853 [2024-11-28 06:41:33.618245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:23.113 [2024-11-28 06:41:33.619556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.113 [2024-11-28 06:41:33.619584] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:23.113 [2024-11-28 06:41:33.619595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:16:23.113 [2024-11-28 06:41:33.619606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.113 [2024-11-28 06:41:33.619650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.113 [2024-11-28 06:41:33.619660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:23.113 [2024-11-28 06:41:33.619669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:23.113 [2024-11-28 06:41:33.619676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.113 [2024-11-28 06:41:33.619700] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:23.113 [2024-11-28 06:41:33.619735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:23.113 [2024-11-28 06:41:33.619776] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:23.113 [2024-11-28 06:41:33.619796] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:23.113 [2024-11-28 06:41:33.619886] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:23.113 [2024-11-28 06:41:33.619898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:23.113 [2024-11-28 06:41:33.619911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:23.113 [2024-11-28 06:41:33.619921] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:23.113 [2024-11-28 06:41:33.619933] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:23.113 [2024-11-28 06:41:33.619944] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:23.113 [2024-11-28 06:41:33.619955] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:23.113 [2024-11-28 06:41:33.619963] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:23.113 [2024-11-28 06:41:33.619972] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:23.113 [2024-11-28 06:41:33.619986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.113 [2024-11-28 06:41:33.619995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:23.113 [2024-11-28 06:41:33.620009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:16:23.113 [2024-11-28 06:41:33.620024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.113 [2024-11-28 06:41:33.620101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.113 [2024-11-28 06:41:33.620118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:23.113 [2024-11-28 06:41:33.620129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:23.113 [2024-11-28 06:41:33.620145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.113 [2024-11-28 06:41:33.620235] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:16:23.113 [2024-11-28 06:41:33.620250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:23.113 [2024-11-28 06:41:33.620264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:23.113 [2024-11-28 06:41:33.620276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:23.113 [2024-11-28 06:41:33.620301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:23.113 [2024-11-28 06:41:33.620324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:23.113 [2024-11-28 06:41:33.620336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:23.113 [2024-11-28 06:41:33.620355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:23.113 [2024-11-28 06:41:33.620364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:23.113 [2024-11-28 06:41:33.620373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:23.113 [2024-11-28 06:41:33.620384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:23.113 [2024-11-28 06:41:33.620392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:23.113 [2024-11-28 06:41:33.620420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:23.113 [2024-11-28 06:41:33.620441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:23.113 [2024-11-28 06:41:33.620452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620465] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:23.113 [2024-11-28 06:41:33.620476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:23.113 [2024-11-28 06:41:33.620485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:23.113 [2024-11-28 06:41:33.620497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:23.113 [2024-11-28 06:41:33.620511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:23.113 [2024-11-28 06:41:33.620521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:23.113 [2024-11-28 06:41:33.620531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:23.114 [2024-11-28 06:41:33.620540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:23.114 [2024-11-28 06:41:33.620550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:23.114 [2024-11-28 06:41:33.620559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:23.114 [2024-11-28 06:41:33.620572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:23.114 [2024-11-28 06:41:33.620581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:23.114 [2024-11-28 06:41:33.620595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:23.114 [2024-11-28 06:41:33.620602] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:23.114 [2024-11-28 06:41:33.620610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:23.114 [2024-11-28 06:41:33.620616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:23.114 [2024-11-28 06:41:33.620628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:23.114 [2024-11-28 06:41:33.620637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:23.114 [2024-11-28 06:41:33.620647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:23.114 [2024-11-28 06:41:33.620653] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:23.114 [2024-11-28 06:41:33.620663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:23.114 [2024-11-28 06:41:33.620672] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:23.114 [2024-11-28 06:41:33.620682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:23.114 [2024-11-28 06:41:33.620698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:23.114 [2024-11-28 06:41:33.620743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:23.114 [2024-11-28 06:41:33.620752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:23.114 [2024-11-28 06:41:33.620765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:23.114 [2024-11-28 06:41:33.620772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:23.114 [2024-11-28 06:41:33.620780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:23.114 [2024-11-28 06:41:33.620790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:23.114 [2024-11-28 06:41:33.620801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:23.114 [2024-11-28 06:41:33.620809] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:23.114 [2024-11-28 06:41:33.620828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:23.114 [2024-11-28 06:41:33.620840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:23.114 [2024-11-28 06:41:33.620849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:23.114 [2024-11-28 06:41:33.620855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:23.114 [2024-11-28 06:41:33.620867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:23.114 [2024-11-28 06:41:33.620875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:23.114 [2024-11-28 06:41:33.620884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:23.114 [2024-11-28 06:41:33.620891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:23.114 [2024-11-28 
06:41:33.620902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:23.114 [2024-11-28 06:41:33.620910] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:23.114 [2024-11-28 06:41:33.620919] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:23.114 [2024-11-28 06:41:33.620926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:23.114 [2024-11-28 06:41:33.620937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:23.114 [2024-11-28 06:41:33.620951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:23.114 [2024-11-28 06:41:33.620960] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:23.114 [2024-11-28 06:41:33.620971] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:23.114 [2024-11-28 06:41:33.620986] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:23.114 [2024-11-28 06:41:33.620996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:23.114 [2024-11-28 06:41:33.621006] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:23.114 [2024-11-28 06:41:33.621015] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:23.114 [2024-11-28 06:41:33.621024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.621033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:23.114 [2024-11-28 06:41:33.621044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:16:23.114 [2024-11-28 06:41:33.621051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.627388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.627504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:23.114 [2024-11-28 06:41:33.627524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.277 ms 00:16:23.114 [2024-11-28 06:41:33.627533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.627643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.627654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:23.114 [2024-11-28 06:41:33.627668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:23.114 [2024-11-28 06:41:33.627677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.636628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 
06:41:33.636660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:23.114 [2024-11-28 06:41:33.636671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.928 ms 00:16:23.114 [2024-11-28 06:41:33.636679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.636746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.636757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:23.114 [2024-11-28 06:41:33.636767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:23.114 [2024-11-28 06:41:33.636779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.637086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.637111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:23.114 [2024-11-28 06:41:33.637124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:23.114 [2024-11-28 06:41:33.637131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.637246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.637255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:23.114 [2024-11-28 06:41:33.637266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:23.114 [2024-11-28 06:41:33.637274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.642616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.642755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:23.114 [2024-11-28 06:41:33.642774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.316 ms 00:16:23.114 [2024-11-28 06:41:33.642788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.645303] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:23.114 [2024-11-28 06:41:33.645337] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:23.114 [2024-11-28 06:41:33.645349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.645358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:23.114 [2024-11-28 06:41:33.645368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:16:23.114 [2024-11-28 06:41:33.645375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.659816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.659848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:23.114 [2024-11-28 06:41:33.659865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.397 ms 00:16:23.114 [2024-11-28 06:41:33.659874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.661771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.661800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:16:23.114 [2024-11-28 06:41:33.661813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:16:23.114 [2024-11-28 06:41:33.661821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.663668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.663697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:23.114 [2024-11-28 06:41:33.663726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:16:23.114 [2024-11-28 06:41:33.663734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.663930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.663946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:23.114 [2024-11-28 06:41:33.663956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:23.114 [2024-11-28 06:41:33.663971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.114 [2024-11-28 06:41:33.681507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.114 [2024-11-28 06:41:33.681641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:23.115 [2024-11-28 06:41:33.681661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.503 ms 00:16:23.115 [2024-11-28 06:41:33.681674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.688964] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:23.115 [2024-11-28 06:41:33.702270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.702305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:23.115 [2024-11-28 06:41:33.702316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.508 ms 00:16:23.115 [2024-11-28 06:41:33.702325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.702381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.702392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:23.115 [2024-11-28 06:41:33.702400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.115 [2024-11-28 06:41:33.702409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.702460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.702470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:23.115 [2024-11-28 06:41:33.702478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:23.115 [2024-11-28 06:41:33.702488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.703647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.703679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:23.115 [2024-11-28 06:41:33.703691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:16:23.115 [2024-11-28 06:41:33.703720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
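Each management step above is traced as an Action / name / duration / status quadruple by trace_step, so per-step timings can be pulled straight out of a captured console. A minimal sketch, assuming one log entry per line (as in the raw console) and a hypothetical capture file ftl.log; neither the file name nor this helper comes from the test suite:

    # Rank FTL management steps by reported duration. Relies only on the
    # trace_step format shown above; "ftl.log" is a hypothetical capture.
    awk '
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: /     { sub(/.*name: /, "");     step = $0 }
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                                             printf "%10.3f ms  %s\n", $0, step }
    ' ftl.log | sort -rn | head

On this startup, such a tally would put Initialize L2P (20.508 ms) and Restore P2L checkpoints (17.503 ms) at the top.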
00:16:23.115 [2024-11-28 06:41:33.703749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.703761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:23.115 [2024-11-28 06:41:33.703768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:23.115 [2024-11-28 06:41:33.703778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.703811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:23.115 [2024-11-28 06:41:33.703823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.703830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:23.115 [2024-11-28 06:41:33.703843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:23.115 [2024-11-28 06:41:33.703850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.707815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.707846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:23.115 [2024-11-28 06:41:33.707858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.939 ms 00:16:23.115 [2024-11-28 06:41:33.707869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.707937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.115 [2024-11-28 06:41:33.707947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:23.115 [2024-11-28 06:41:33.707957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:23.115 [2024-11-28 06:41:33.707968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.115 [2024-11-28 06:41:33.708742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:23.115 [2024-11-28 06:41:33.709718] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.438 ms, result 0 00:16:23.115 [2024-11-28 06:41:33.711623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:23.115 Some configs were skipped because the RPC state that can call them passed over. 
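The trim flow that follows exercises bdev_ftl_unmap at both ends of the L2P space: the second LBA, 23591936, is the L2P entry count reported above (23592960) minus the 1024-block unmap size, so it targets the last 1024 blocks of the device. A hedged sketch of those two RPC calls; the bdev_ftl_unmap method and its -b/--lba/--num_blocks options are taken from this log, while the surrounding shell is illustrative glue:

    # Unmap 1024 blocks at the head and at the tail of the 23592960-entry
    # L2P space, via the same rpc.py the test invokes below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # first 1024 blocks
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # last 1024 blocks (23592960 - 1024)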
00:16:23.115 06:41:33 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:23.374 [2024-11-28 06:41:33.916064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.374 [2024-11-28 06:41:33.916120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:23.374 [2024-11-28 06:41:33.916133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:16:23.374 [2024-11-28 06:41:33.916143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.374 [2024-11-28 06:41:33.916177] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 5.183 ms, result 0 00:16:23.374 true 00:16:23.374 06:41:33 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:23.374 [2024-11-28 06:41:34.107832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.374 [2024-11-28 06:41:34.107871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:23.374 [2024-11-28 06:41:34.107884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:16:23.374 [2024-11-28 06:41:34.107891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.374 [2024-11-28 06:41:34.107929] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 4.644 ms, result 0 00:16:23.374 true 00:16:23.374 06:41:34 -- ftl/trim.sh@102 -- # killprocess 83260 00:16:23.374 06:41:34 -- common/autotest_common.sh@936 -- # '[' -z 83260 ']' 00:16:23.374 06:41:34 -- common/autotest_common.sh@940 -- # kill -0 83260 00:16:23.374 06:41:34 -- common/autotest_common.sh@941 -- # uname 00:16:23.374 06:41:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.374 06:41:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83260 00:16:23.634 killing process with pid 83260 00:16:23.634 06:41:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:23.634 06:41:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:23.634 06:41:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83260' 00:16:23.634 06:41:34 -- common/autotest_common.sh@955 -- # kill 83260 00:16:23.634 06:41:34 -- common/autotest_common.sh@960 -- # wait 83260 00:16:23.634 [2024-11-28 06:41:34.243141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.243192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:23.634 [2024-11-28 06:41:34.243206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:23.634 [2024-11-28 06:41:34.243217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.243240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:23.634 [2024-11-28 06:41:34.243652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.243668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:23.634 [2024-11-28 06:41:34.243683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:16:23.634 [2024-11-28 06:41:34.243691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 
06:41:34.243999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.244010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:23.634 [2024-11-28 06:41:34.244021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:23.634 [2024-11-28 06:41:34.244032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.248595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.248628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:23.634 [2024-11-28 06:41:34.248639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.541 ms 00:16:23.634 [2024-11-28 06:41:34.248646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.255555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.255597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:23.634 [2024-11-28 06:41:34.255611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.859 ms 00:16:23.634 [2024-11-28 06:41:34.255619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.257816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.257931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:23.634 [2024-11-28 06:41:34.257949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:16:23.634 [2024-11-28 06:41:34.257957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.261871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.261902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:23.634 [2024-11-28 06:41:34.261913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:16:23.634 [2024-11-28 06:41:34.261921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.262051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.262065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:23.634 [2024-11-28 06:41:34.262075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:23.634 [2024-11-28 06:41:34.262085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.264702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.264739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:23.634 [2024-11-28 06:41:34.264752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:16:23.634 [2024-11-28 06:41:34.264759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.266968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.266997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:23.634 [2024-11-28 06:41:34.267010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:16:23.634 [2024-11-28 06:41:34.267017] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.268815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.268844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:23.634 [2024-11-28 06:41:34.268855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:16:23.634 [2024-11-28 06:41:34.268862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.270449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.634 [2024-11-28 06:41:34.270478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:23.634 [2024-11-28 06:41:34.270488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:16:23.634 [2024-11-28 06:41:34.270495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.634 [2024-11-28 06:41:34.270528] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:23.634 [2024-11-28 06:41:34.270542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:23.634 [2024-11-28 06:41:34.270677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270701] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270933] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.270991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 
06:41:34.271139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:16:23.635 [2024-11-28 06:41:34.271343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:23.635 [2024-11-28 06:41:34.271422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:23.635 [2024-11-28 06:41:34.271432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:23.635 [2024-11-28 06:41:34.271439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:23.635 [2024-11-28 06:41:34.271448] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:23.635 [2024-11-28 06:41:34.271455] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:23.635 [2024-11-28 06:41:34.271464] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:23.635 [2024-11-28 06:41:34.271471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:23.636 [2024-11-28 06:41:34.271480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:23.636 [2024-11-28 06:41:34.271487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:23.636 [2024-11-28 06:41:34.271495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:23.636 [2024-11-28 06:41:34.271501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:23.636 [2024-11-28 06:41:34.271511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.636 [2024-11-28 06:41:34.271519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:23.636 [2024-11-28 06:41:34.271530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:16:23.636 [2024-11-28 06:41:34.271542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.272927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.636 [2024-11-28 06:41:34.272946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:23.636 [2024-11-28 06:41:34.272957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:16:23.636 [2024-11-28 06:41:34.272967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.273035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:23.636 [2024-11-28 06:41:34.273044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:23.636 
[2024-11-28 06:41:34.273053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:23.636 [2024-11-28 06:41:34.273060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.278103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.278132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:23.636 [2024-11-28 06:41:34.278143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.278151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.278218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.278228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:23.636 [2024-11-28 06:41:34.278239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.278247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.278285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.278294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:23.636 [2024-11-28 06:41:34.278303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.278311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.278332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.278341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:23.636 [2024-11-28 06:41:34.278349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.278357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.287632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.287670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:23.636 [2024-11-28 06:41:34.287682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.287690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:23.636 [2024-11-28 06:41:34.291447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:23.636 [2024-11-28 06:41:34.291505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291557] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:23.636 [2024-11-28 06:41:34.291567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:23.636 [2024-11-28 06:41:34.291666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:23.636 [2024-11-28 06:41:34.291747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:23.636 [2024-11-28 06:41:34.291820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.291870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:23.636 [2024-11-28 06:41:34.291880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:23.636 [2024-11-28 06:41:34.291892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:23.636 [2024-11-28 06:41:34.291901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:23.636 [2024-11-28 06:41:34.292033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.864 ms, result 0 00:16:23.894 06:41:34 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:23.894 [2024-11-28 06:41:34.520269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
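trim.sh@105 above reads the whole bdev back out through spdk_dd (--ib=ftl0, --count=65536). The copy-progress ticks further down report 256/256 [MB] for those 65536 blocks, which works out to 4 KiB per block, and at the reported average of 23 MBps the transfer should take roughly 11 s, consistent with the ~06:41:35 to ~06:41:46 wall-clock span of the 'Copying:' lines. A back-of-envelope check (plain arithmetic inferred from the log, not part of it; spdk_dd's 'MB' here evidently means MiB):

  65536 blocks × 4096 B/block = 268435456 B = 256 MiB
  256 MiB / 23 MiB/s ≈ 11.1 s   (copy runs ~06:41:35 to ~06:41:46 below)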
00:16:23.894 [2024-11-28 06:41:34.520390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83296 ] 00:16:23.894 [2024-11-28 06:41:34.655072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.152 [2024-11-28 06:41:34.685911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.152 [2024-11-28 06:41:34.769933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.152 [2024-11-28 06:41:34.770002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.152 [2024-11-28 06:41:34.918980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.152 [2024-11-28 06:41:34.919023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.152 [2024-11-28 06:41:34.919041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.152 [2024-11-28 06:41:34.919052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.152 [2024-11-28 06:41:34.921245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.152 [2024-11-28 06:41:34.921417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.152 [2024-11-28 06:41:34.921434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:16:24.152 [2024-11-28 06:41:34.921442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.152 [2024-11-28 06:41:34.921506] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.152 [2024-11-28 06:41:34.921764] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.152 [2024-11-28 06:41:34.921780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.152 [2024-11-28 06:41:34.921788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.152 [2024-11-28 06:41:34.921800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:24.152 [2024-11-28 06:41:34.921807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.922896] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:24.413 [2024-11-28 06:41:34.925330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.925365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:24.413 [2024-11-28 06:41:34.925375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:16:24.413 [2024-11-28 06:41:34.925383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.925442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.925453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:24.413 [2024-11-28 06:41:34.925461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:24.413 [2024-11-28 06:41:34.925470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.930203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 
06:41:34.930230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.413 [2024-11-28 06:41:34.930239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:16:24.413 [2024-11-28 06:41:34.930247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.930336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.930347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.413 [2024-11-28 06:41:34.930357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:24.413 [2024-11-28 06:41:34.930365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.930389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.930401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.413 [2024-11-28 06:41:34.930408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.413 [2024-11-28 06:41:34.930421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.930443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:24.413 [2024-11-28 06:41:34.931774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.931897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.413 [2024-11-28 06:41:34.931920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:16:24.413 [2024-11-28 06:41:34.931928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.931968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.931980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.413 [2024-11-28 06:41:34.931988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:24.413 [2024-11-28 06:41:34.931995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.932013] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:24.413 [2024-11-28 06:41:34.932029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:24.413 [2024-11-28 06:41:34.932063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:24.413 [2024-11-28 06:41:34.932078] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:24.413 [2024-11-28 06:41:34.932150] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:24.413 [2024-11-28 06:41:34.932163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.413 [2024-11-28 06:41:34.932173] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:24.413 [2024-11-28 06:41:34.932184] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932192] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932200] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:24.413 [2024-11-28 06:41:34.932207] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.413 [2024-11-28 06:41:34.932218] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:24.413 [2024-11-28 06:41:34.932225] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:24.413 [2024-11-28 06:41:34.932232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.932239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.413 [2024-11-28 06:41:34.932248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:16:24.413 [2024-11-28 06:41:34.932254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.932319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.413 [2024-11-28 06:41:34.932328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.413 [2024-11-28 06:41:34.932336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:24.413 [2024-11-28 06:41:34.932346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.413 [2024-11-28 06:41:34.932432] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.413 [2024-11-28 06:41:34.932443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.413 [2024-11-28 06:41:34.932451] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.413 [2024-11-28 06:41:34.932481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.413 [2024-11-28 06:41:34.932507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.413 [2024-11-28 06:41:34.932522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.413 [2024-11-28 06:41:34.932530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:24.413 [2024-11-28 06:41:34.932537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.413 [2024-11-28 06:41:34.932545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.413 [2024-11-28 06:41:34.932552] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:24.413 [2024-11-28 06:41:34.932561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.413 [2024-11-28 06:41:34.932577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:24.413 [2024-11-28 06:41:34.932584] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:24.413 [2024-11-28 06:41:34.932600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:24.413 [2024-11-28 06:41:34.932607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.413 [2024-11-28 06:41:34.932623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.413 [2024-11-28 06:41:34.932646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.413 [2024-11-28 06:41:34.932669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.413 [2024-11-28 06:41:34.932695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:24.413 [2024-11-28 06:41:34.932729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.413 [2024-11-28 06:41:34.932736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:24.413 [2024-11-28 06:41:34.932744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.413 [2024-11-28 06:41:34.932752] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.413 [2024-11-28 06:41:34.932759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:24.413 [2024-11-28 06:41:34.932767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.413 [2024-11-28 06:41:34.932774] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.414 [2024-11-28 06:41:34.932783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.414 [2024-11-28 06:41:34.932791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.414 [2024-11-28 06:41:34.932798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.414 [2024-11-28 06:41:34.932808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.414 [2024-11-28 06:41:34.932815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.414 [2024-11-28 06:41:34.932823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.414 [2024-11-28 06:41:34.932832] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.414 [2024-11-28 06:41:34.932840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.414 [2024-11-28 06:41:34.932848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.414 [2024-11-28 06:41:34.932856] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.414 [2024-11-28 06:41:34.932867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.414 [2024-11-28 06:41:34.932878] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:24.414 [2024-11-28 06:41:34.932887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:24.414 [2024-11-28 06:41:34.932895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:24.414 [2024-11-28 06:41:34.932903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:24.414 [2024-11-28 06:41:34.932911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:24.414 [2024-11-28 06:41:34.932920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:24.414 [2024-11-28 06:41:34.932927] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:24.414 [2024-11-28 06:41:34.932935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:24.414 [2024-11-28 06:41:34.932945] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:24.414 [2024-11-28 06:41:34.932953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:24.414 [2024-11-28 06:41:34.932960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:24.414 [2024-11-28 06:41:34.932969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:24.414 [2024-11-28 06:41:34.932978] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:24.414 [2024-11-28 06:41:34.932985] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.414 [2024-11-28 06:41:34.932996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.414 [2024-11-28 06:41:34.933013] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.414 [2024-11-28 06:41:34.933020] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.414 [2024-11-28 06:41:34.933027] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.414 [2024-11-28 06:41:34.933034] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.414 [2024-11-28 06:41:34.933042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.933054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.414 [2024-11-28 06:41:34.933062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:16:24.414 [2024-11-28 06:41:34.933068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.938963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.939080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.414 [2024-11-28 06:41:34.939094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.850 ms 00:16:24.414 [2024-11-28 06:41:34.939110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.939221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.939232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.414 [2024-11-28 06:41:34.939244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:24.414 [2024-11-28 06:41:34.939251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.956839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.956881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.414 [2024-11-28 06:41:34.956895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.565 ms 00:16:24.414 [2024-11-28 06:41:34.956909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.956988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.957000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.414 [2024-11-28 06:41:34.957010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.414 [2024-11-28 06:41:34.957018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.957335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.957353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.414 [2024-11-28 06:41:34.957370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:16:24.414 [2024-11-28 06:41:34.957380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.957516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.957528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.414 [2024-11-28 06:41:34.957539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:16:24.414 [2024-11-28 06:41:34.957552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.963038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.963072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.414 [2024-11-28 06:41:34.963081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.457 ms 00:16:24.414 
[2024-11-28 06:41:34.963089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.965664] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:24.414 [2024-11-28 06:41:34.965699] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:24.414 [2024-11-28 06:41:34.965729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.965737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:24.414 [2024-11-28 06:41:34.965745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:16:24.414 [2024-11-28 06:41:34.965753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.980657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.980696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:24.414 [2024-11-28 06:41:34.980717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.859 ms 00:16:24.414 [2024-11-28 06:41:34.980725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.982880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.982911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:24.414 [2024-11-28 06:41:34.982920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:16:24.414 [2024-11-28 06:41:34.982927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.984481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.984614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:24.414 [2024-11-28 06:41:34.984628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:16:24.414 [2024-11-28 06:41:34.984635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:34.984854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:34.984866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.414 [2024-11-28 06:41:34.984876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:24.414 [2024-11-28 06:41:34.984887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:35.002747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:35.002891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:24.414 [2024-11-28 06:41:35.002907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.828 ms 00:16:24.414 [2024-11-28 06:41:35.002916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:35.010273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.414 [2024-11-28 06:41:35.024342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.414 [2024-11-28 06:41:35.024377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.414 [2024-11-28 06:41:35.024389] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.357 ms 00:16:24.414 [2024-11-28 06:41:35.024397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.414 [2024-11-28 06:41:35.024475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.024488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:24.415 [2024-11-28 06:41:35.024497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:24.415 [2024-11-28 06:41:35.024504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.024563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.024572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.415 [2024-11-28 06:41:35.024581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:24.415 [2024-11-28 06:41:35.024588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.025823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.025851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:24.415 [2024-11-28 06:41:35.025865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:16:24.415 [2024-11-28 06:41:35.025873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.025909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.025918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.415 [2024-11-28 06:41:35.025926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.415 [2024-11-28 06:41:35.025936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.025968] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:24.415 [2024-11-28 06:41:35.025978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.025986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:24.415 [2024-11-28 06:41:35.025994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:24.415 [2024-11-28 06:41:35.026003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.029976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.030008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.415 [2024-11-28 06:41:35.030017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.950 ms 00:16:24.415 [2024-11-28 06:41:35.030025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.030094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.415 [2024-11-28 06:41:35.030104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.415 [2024-11-28 06:41:35.030113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:24.415 [2024-11-28 06:41:35.030122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.415 [2024-11-28 06:41:35.030893] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.415 [2024-11-28 06:41:35.031861] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.633 ms, result 0 00:16:24.415 [2024-11-28 06:41:35.033340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:24.415 [2024-11-28 06:41:35.041446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:25.348  [2024-11-28T06:41:37.493Z] Copying: 39/256 [MB] (39 MBps) [2024-11-28T06:41:38.427Z] Copying: 70/256 [MB] (30 MBps) [2024-11-28T06:41:39.361Z] Copying: 96/256 [MB] (26 MBps) [2024-11-28T06:41:40.295Z] Copying: 118/256 [MB] (21 MBps) [2024-11-28T06:41:41.230Z] Copying: 139/256 [MB] (21 MBps) [2024-11-28T06:41:42.165Z] Copying: 158/256 [MB] (19 MBps) [2024-11-28T06:41:43.535Z] Copying: 180/256 [MB] (22 MBps) [2024-11-28T06:41:44.144Z] Copying: 197/256 [MB] (17 MBps) [2024-11-28T06:41:45.516Z] Copying: 218/256 [MB] (20 MBps) [2024-11-28T06:41:46.082Z] Copying: 244/256 [MB] (25 MBps) [2024-11-28T06:41:46.652Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-28 06:41:46.517402] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.882 [2024-11-28 06:41:46.518779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.518949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.882 [2024-11-28 06:41:46.518985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:35.882 [2024-11-28 06:41:46.518997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.519035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:35.882 [2024-11-28 06:41:46.519513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.519541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.882 [2024-11-28 06:41:46.519555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:16:35.882 [2024-11-28 06:41:46.519567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.519990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.520014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.882 [2024-11-28 06:41:46.520028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:16:35.882 [2024-11-28 06:41:46.520039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.523971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.523994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.882 [2024-11-28 06:41:46.524004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:16:35.882 [2024-11-28 06:41:46.524012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.532259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.532300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:35.882 
[2024-11-28 06:41:46.532310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.214 ms 00:16:35.882 [2024-11-28 06:41:46.532317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.533874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.533905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.882 [2024-11-28 06:41:46.533915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:16:35.882 [2024-11-28 06:41:46.533922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.882 [2024-11-28 06:41:46.536926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.882 [2024-11-28 06:41:46.536959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.882 [2024-11-28 06:41:46.536968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.971 ms 00:16:35.882 [2024-11-28 06:41:46.536976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.537098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.883 [2024-11-28 06:41:46.537109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.883 [2024-11-28 06:41:46.537117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:35.883 [2024-11-28 06:41:46.537124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.538924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.883 [2024-11-28 06:41:46.538953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:35.883 [2024-11-28 06:41:46.538962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:16:35.883 [2024-11-28 06:41:46.538969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.540335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.883 [2024-11-28 06:41:46.540364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:35.883 [2024-11-28 06:41:46.540373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:16:35.883 [2024-11-28 06:41:46.540381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.541554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.883 [2024-11-28 06:41:46.541584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.883 [2024-11-28 06:41:46.541592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:16:35.883 [2024-11-28 06:41:46.541600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.543883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.883 [2024-11-28 06:41:46.543911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.883 [2024-11-28 06:41:46.543920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:16:35.883 [2024-11-28 06:41:46.543926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.883 [2024-11-28 06:41:46.543958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.883 [2024-11-28 06:41:46.543976] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.883 [... Bands 2-99 elided: ftl_dev_dump_bands reports the identical line for every band, 0 / 261120 wr_cnt: 0 state: free ...] [2024-11-28 06:41:46.545073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:16:35.884 [2024-11-28 06:41:46.545110] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.884 [2024-11-28 06:41:46.545131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ea43bc93-5675-45fc-ae12-8201ac6e92f9 00:16:35.884 [2024-11-28 06:41:46.545160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.884 [2024-11-28 06:41:46.545315] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.884 [2024-11-28 06:41:46.545324] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.884 [2024-11-28 06:41:46.545331] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.884 [2024-11-28 06:41:46.545345] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.884 [2024-11-28 06:41:46.545352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.884 [2024-11-28 06:41:46.545359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.884 [2024-11-28 06:41:46.545365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.884 [2024-11-28 06:41:46.545372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.884 [2024-11-28 06:41:46.545379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.884 [2024-11-28 06:41:46.545387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.884 [2024-11-28 06:41:46.545399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:16:35.884 [2024-11-28 06:41:46.545406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.546760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.884 [2024-11-28 06:41:46.546775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.884 [2024-11-28 06:41:46.546783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:16:35.884 [2024-11-28 06:41:46.546791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.546846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.884 [2024-11-28 06:41:46.546858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.884 [2024-11-28 06:41:46.546866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:35.884 [2024-11-28 06:41:46.546873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.551737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.551845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.884 [2024-11-28 06:41:46.551860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.551869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.551930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.551944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.884 [2024-11-28 06:41:46.551952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.551960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:16:35.884 [2024-11-28 06:41:46.551998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.552008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.884 [2024-11-28 06:41:46.552016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.552023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.552042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.552050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.884 [2024-11-28 06:41:46.552061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.552068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.560136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.560169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.884 [2024-11-28 06:41:46.560184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.560192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.563675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.563830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.884 [2024-11-28 06:41:46.563845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.563853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.563877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.563885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.884 [2024-11-28 06:41:46.563893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.563901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.563937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.563945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.884 [2024-11-28 06:41:46.563953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.563963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.564027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.564037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.884 [2024-11-28 06:41:46.564048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.564055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.564084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.564093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.884 [2024-11-28 06:41:46.564100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 
06:41:46.564108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.564144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.564153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.884 [2024-11-28 06:41:46.564160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.564169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.564215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.884 [2024-11-28 06:41:46.564225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.884 [2024-11-28 06:41:46.564233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.884 [2024-11-28 06:41:46.564243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.884 [2024-11-28 06:41:46.564375] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.584 ms, result 0 00:16:36.143 00:16:36.143 00:16:36.143 06:41:46 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:36.709 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:16:36.709 06:41:47 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:36.709 06:41:47 -- ftl/trim.sh@109 -- # fio_kill 00:16:36.709 06:41:47 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:36.709 06:41:47 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:36.710 06:41:47 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:16:36.710 06:41:47 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:36.710 06:41:47 -- ftl/trim.sh@20 -- # killprocess 83260 00:16:36.710 06:41:47 -- common/autotest_common.sh@936 -- # '[' -z 83260 ']' 00:16:36.710 06:41:47 -- common/autotest_common.sh@940 -- # kill -0 83260 00:16:36.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83260) - No such process 00:16:36.710 Process with pid 83260 is not found 00:16:36.710 06:41:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83260 is not found' 00:16:36.710 ************************************ 00:16:36.710 END TEST ftl_trim 00:16:36.710 ************************************ 00:16:36.710 00:16:36.710 real 0m52.881s 00:16:36.710 user 1m16.012s 00:16:36.710 sys 0m4.505s 00:16:36.710 06:41:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:36.710 06:41:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.710 06:41:47 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:16:36.710 06:41:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:36.710 06:41:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.710 06:41:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.710 ************************************ 00:16:36.710 START TEST ftl_restore 00:16:36.710 ************************************ 00:16:36.710 06:41:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:16:36.969 * Looking for test storage... 
00:16:36.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:36.969 06:41:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:36.969 06:41:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:36.969 06:41:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:36.969 06:41:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:36.969 06:41:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:36.969 06:41:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:36.969 06:41:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:36.969 06:41:47 -- scripts/common.sh@335 -- # IFS=.-: 00:16:36.969 06:41:47 -- scripts/common.sh@335 -- # read -ra ver1 00:16:36.969 06:41:47 -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.969 06:41:47 -- scripts/common.sh@336 -- # read -ra ver2 00:16:36.969 06:41:47 -- scripts/common.sh@337 -- # local 'op=<' 00:16:36.969 06:41:47 -- scripts/common.sh@339 -- # ver1_l=2 00:16:36.969 06:41:47 -- scripts/common.sh@340 -- # ver2_l=1 00:16:36.969 06:41:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:36.969 06:41:47 -- scripts/common.sh@343 -- # case "$op" in 00:16:36.969 06:41:47 -- scripts/common.sh@344 -- # : 1 00:16:36.969 06:41:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:36.969 06:41:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:36.969 06:41:47 -- scripts/common.sh@364 -- # decimal 1 00:16:36.969 06:41:47 -- scripts/common.sh@352 -- # local d=1 00:16:36.969 06:41:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.969 06:41:47 -- scripts/common.sh@354 -- # echo 1 00:16:36.969 06:41:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:36.969 06:41:47 -- scripts/common.sh@365 -- # decimal 2 00:16:36.969 06:41:47 -- scripts/common.sh@352 -- # local d=2 00:16:36.969 06:41:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.969 06:41:47 -- scripts/common.sh@354 -- # echo 2 00:16:36.969 06:41:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:36.969 06:41:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:36.969 06:41:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:36.969 06:41:47 -- scripts/common.sh@367 -- # return 0 00:16:36.969 06:41:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.969 06:41:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:36.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.969 --rc genhtml_branch_coverage=1 00:16:36.969 --rc genhtml_function_coverage=1 00:16:36.969 --rc genhtml_legend=1 00:16:36.969 --rc geninfo_all_blocks=1 00:16:36.969 --rc geninfo_unexecuted_blocks=1 00:16:36.969 00:16:36.969 ' 00:16:36.969 06:41:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:36.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.969 --rc genhtml_branch_coverage=1 00:16:36.969 --rc genhtml_function_coverage=1 00:16:36.969 --rc genhtml_legend=1 00:16:36.969 --rc geninfo_all_blocks=1 00:16:36.969 --rc geninfo_unexecuted_blocks=1 00:16:36.969 00:16:36.969 ' 00:16:36.969 06:41:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:36.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.969 --rc genhtml_branch_coverage=1 00:16:36.969 --rc genhtml_function_coverage=1 00:16:36.969 --rc genhtml_legend=1 00:16:36.969 --rc geninfo_all_blocks=1 00:16:36.969 --rc geninfo_unexecuted_blocks=1 00:16:36.969 00:16:36.969 ' 00:16:36.969 06:41:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:36.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.969 --rc genhtml_branch_coverage=1 00:16:36.969 --rc genhtml_function_coverage=1 00:16:36.969 --rc genhtml_legend=1 00:16:36.969 --rc geninfo_all_blocks=1 00:16:36.969 --rc geninfo_unexecuted_blocks=1 00:16:36.969 00:16:36.969 ' 00:16:36.969 06:41:47 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:36.969 06:41:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:16:36.969 06:41:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:36.969 06:41:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:36.969 06:41:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:36.969 06:41:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:36.969 06:41:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.969 06:41:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:36.969 06:41:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:36.969 06:41:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:36.969 06:41:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:36.969 06:41:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:36.969 06:41:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:36.969 06:41:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:36.969 06:41:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:36.969 06:41:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:36.969 06:41:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:36.969 06:41:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:36.969 06:41:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:36.969 06:41:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:36.969 06:41:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:36.969 06:41:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:36.969 06:41:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:36.969 06:41:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:36.969 06:41:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:36.969 06:41:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:36.969 06:41:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:36.969 06:41:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:36.969 06:41:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:36.969 06:41:47 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.969 06:41:47 -- ftl/restore.sh@13 -- # mktemp -d 00:16:36.969 06:41:47 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.BJZ2afKP5z 00:16:36.969 06:41:47 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:36.969 06:41:47 -- ftl/restore.sh@16 -- # case $opt in 00:16:36.969 06:41:47 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:16:36.969 06:41:47 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 
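The xtrace lines above show restore.sh consuming its arguments with getopts: the :u:c:f optstring, and the -c branch storing the NV-cache PCIe address in nv_cache, followed immediately below by a shift that leaves the base device as the first positional argument. A minimal sketch of that pattern, assuming the loop structure around the traced statements (only the optstring and the nv_cache/device names come from the trace):

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # e.g. 0000:00:06.0
    esac
  done
  shift $((OPTIND - 1))        # the trace shows the equivalent 'shift 2'
  device=$1                    # e.g. 0000:00:07.0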
00:16:36.969 06:41:47 -- ftl/restore.sh@23 -- # shift 2 00:16:36.969 06:41:47 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:16:36.969 06:41:47 -- ftl/restore.sh@25 -- # timeout=240 00:16:36.969 06:41:47 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:16:36.969 06:41:47 -- ftl/restore.sh@39 -- # svcpid=83499 00:16:36.969 06:41:47 -- ftl/restore.sh@41 -- # waitforlisten 83499 00:16:36.969 06:41:47 -- common/autotest_common.sh@829 -- # '[' -z 83499 ']' 00:16:36.969 06:41:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.969 06:41:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.969 06:41:47 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:36.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.969 06:41:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.969 06:41:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.969 06:41:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.969 [2024-11-28 06:41:47.659852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:36.969 [2024-11-28 06:41:47.660124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83499 ] 00:16:37.229 [2024-11-28 06:41:47.796349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.229 [2024-11-28 06:41:47.825500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.229 [2024-11-28 06:41:47.825857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.797 06:41:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.797 06:41:48 -- common/autotest_common.sh@862 -- # return 0 00:16:37.797 06:41:48 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:37.797 06:41:48 -- ftl/common.sh@54 -- # local name=nvme0 00:16:37.797 06:41:48 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:37.797 06:41:48 -- ftl/common.sh@56 -- # local size=103424 00:16:37.797 06:41:48 -- ftl/common.sh@59 -- # local base_bdev 00:16:37.797 06:41:48 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:38.056 06:41:48 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:38.056 06:41:48 -- ftl/common.sh@62 -- # local base_size 00:16:38.056 06:41:48 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:38.056 06:41:48 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:38.056 06:41:48 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:38.056 06:41:48 -- common/autotest_common.sh@1369 -- # local bs 00:16:38.056 06:41:48 -- common/autotest_common.sh@1370 -- # local nb 00:16:38.056 06:41:48 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:38.313 06:41:48 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:38.313 { 00:16:38.313 "name": "nvme0n1", 00:16:38.313 "aliases": [ 00:16:38.313 "15f5ecc0-2cd3-43f4-9f7d-e2570814a2cd" 00:16:38.313 ], 00:16:38.313 "product_name": "NVMe disk", 00:16:38.313 "block_size": 4096, 00:16:38.313 "num_blocks": 1310720, 00:16:38.313 "uuid": 
"15f5ecc0-2cd3-43f4-9f7d-e2570814a2cd", 00:16:38.313 "assigned_rate_limits": { 00:16:38.313 "rw_ios_per_sec": 0, 00:16:38.313 "rw_mbytes_per_sec": 0, 00:16:38.313 "r_mbytes_per_sec": 0, 00:16:38.313 "w_mbytes_per_sec": 0 00:16:38.313 }, 00:16:38.313 "claimed": true, 00:16:38.313 "claim_type": "read_many_write_one", 00:16:38.313 "zoned": false, 00:16:38.313 "supported_io_types": { 00:16:38.313 "read": true, 00:16:38.313 "write": true, 00:16:38.313 "unmap": true, 00:16:38.313 "write_zeroes": true, 00:16:38.313 "flush": true, 00:16:38.313 "reset": true, 00:16:38.313 "compare": true, 00:16:38.313 "compare_and_write": false, 00:16:38.313 "abort": true, 00:16:38.313 "nvme_admin": true, 00:16:38.313 "nvme_io": true 00:16:38.313 }, 00:16:38.313 "driver_specific": { 00:16:38.313 "nvme": [ 00:16:38.313 { 00:16:38.313 "pci_address": "0000:00:07.0", 00:16:38.313 "trid": { 00:16:38.313 "trtype": "PCIe", 00:16:38.313 "traddr": "0000:00:07.0" 00:16:38.313 }, 00:16:38.313 "ctrlr_data": { 00:16:38.313 "cntlid": 0, 00:16:38.313 "vendor_id": "0x1b36", 00:16:38.313 "model_number": "QEMU NVMe Ctrl", 00:16:38.313 "serial_number": "12341", 00:16:38.313 "firmware_revision": "8.0.0", 00:16:38.313 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:38.313 "oacs": { 00:16:38.313 "security": 0, 00:16:38.313 "format": 1, 00:16:38.313 "firmware": 0, 00:16:38.313 "ns_manage": 1 00:16:38.313 }, 00:16:38.313 "multi_ctrlr": false, 00:16:38.313 "ana_reporting": false 00:16:38.313 }, 00:16:38.313 "vs": { 00:16:38.313 "nvme_version": "1.4" 00:16:38.313 }, 00:16:38.313 "ns_data": { 00:16:38.313 "id": 1, 00:16:38.313 "can_share": false 00:16:38.313 } 00:16:38.313 } 00:16:38.313 ], 00:16:38.313 "mp_policy": "active_passive" 00:16:38.313 } 00:16:38.313 } 00:16:38.313 ]' 00:16:38.313 06:41:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:38.313 06:41:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:38.313 06:41:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:38.313 06:41:48 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:38.313 06:41:48 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:38.313 06:41:48 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:38.313 06:41:48 -- ftl/common.sh@63 -- # base_size=5120 00:16:38.313 06:41:48 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:38.313 06:41:48 -- ftl/common.sh@67 -- # clear_lvols 00:16:38.313 06:41:48 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:38.313 06:41:48 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:38.584 06:41:49 -- ftl/common.sh@28 -- # stores=0e8f1e28-e12c-49a9-8346-a3f584cf9225 00:16:38.584 06:41:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:38.584 06:41:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e8f1e28-e12c-49a9-8346-a3f584cf9225 00:16:38.584 06:41:49 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:38.877 06:41:49 -- ftl/common.sh@68 -- # lvs=6eb48a59-6d7e-40da-aaca-677fab2b140e 00:16:38.877 06:41:49 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6eb48a59-6d7e-40da-aaca-677fab2b140e 00:16:39.137 06:41:49 -- ftl/restore.sh@43 -- # split_bdev=c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:16:39.137 06:41:49 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 
c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- ftl/common.sh@35 -- # local name=nvc0 00:16:39.137 06:41:49 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:39.137 06:41:49 -- ftl/common.sh@37 -- # local base_bdev=c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- ftl/common.sh@38 -- # local cache_size= 00:16:39.137 06:41:49 -- ftl/common.sh@41 -- # get_bdev_size c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- common/autotest_common.sh@1367 -- # local bdev_name=c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:39.137 06:41:49 -- common/autotest_common.sh@1369 -- # local bs 00:16:39.137 06:41:49 -- common/autotest_common.sh@1370 -- # local nb 00:16:39.137 06:41:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.137 06:41:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:39.137 { 00:16:39.137 "name": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:39.137 "aliases": [ 00:16:39.137 "lvs/nvme0n1p0" 00:16:39.137 ], 00:16:39.137 "product_name": "Logical Volume", 00:16:39.137 "block_size": 4096, 00:16:39.137 "num_blocks": 26476544, 00:16:39.137 "uuid": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:39.137 "assigned_rate_limits": { 00:16:39.137 "rw_ios_per_sec": 0, 00:16:39.137 "rw_mbytes_per_sec": 0, 00:16:39.137 "r_mbytes_per_sec": 0, 00:16:39.137 "w_mbytes_per_sec": 0 00:16:39.137 }, 00:16:39.137 "claimed": false, 00:16:39.137 "zoned": false, 00:16:39.137 "supported_io_types": { 00:16:39.137 "read": true, 00:16:39.137 "write": true, 00:16:39.137 "unmap": true, 00:16:39.137 "write_zeroes": true, 00:16:39.137 "flush": false, 00:16:39.137 "reset": true, 00:16:39.137 "compare": false, 00:16:39.137 "compare_and_write": false, 00:16:39.137 "abort": false, 00:16:39.137 "nvme_admin": false, 00:16:39.137 "nvme_io": false 00:16:39.137 }, 00:16:39.137 "driver_specific": { 00:16:39.137 "lvol": { 00:16:39.137 "lvol_store_uuid": "6eb48a59-6d7e-40da-aaca-677fab2b140e", 00:16:39.137 "base_bdev": "nvme0n1", 00:16:39.137 "thin_provision": true, 00:16:39.137 "snapshot": false, 00:16:39.137 "clone": false, 00:16:39.137 "esnap_clone": false 00:16:39.137 } 00:16:39.137 } 00:16:39.137 } 00:16:39.137 ]' 00:16:39.137 06:41:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:39.396 06:41:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:39.396 06:41:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:39.396 06:41:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:39.396 06:41:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:39.396 06:41:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:39.396 06:41:49 -- ftl/common.sh@41 -- # local base_size=5171 00:16:39.396 06:41:49 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:39.396 06:41:49 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:39.656 06:41:50 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:39.656 06:41:50 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:39.656 06:41:50 -- ftl/common.sh@48 -- # get_bdev_size c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.656 06:41:50 -- common/autotest_common.sh@1367 -- # local bdev_name=c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.656 06:41:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:39.656 06:41:50 -- common/autotest_common.sh@1369 -- # local 
bs 00:16:39.656 06:41:50 -- common/autotest_common.sh@1370 -- # local nb 00:16:39.656 06:41:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.656 06:41:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:39.656 { 00:16:39.656 "name": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:39.656 "aliases": [ 00:16:39.656 "lvs/nvme0n1p0" 00:16:39.656 ], 00:16:39.656 "product_name": "Logical Volume", 00:16:39.656 "block_size": 4096, 00:16:39.656 "num_blocks": 26476544, 00:16:39.656 "uuid": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:39.656 "assigned_rate_limits": { 00:16:39.656 "rw_ios_per_sec": 0, 00:16:39.656 "rw_mbytes_per_sec": 0, 00:16:39.656 "r_mbytes_per_sec": 0, 00:16:39.656 "w_mbytes_per_sec": 0 00:16:39.656 }, 00:16:39.656 "claimed": false, 00:16:39.656 "zoned": false, 00:16:39.656 "supported_io_types": { 00:16:39.656 "read": true, 00:16:39.656 "write": true, 00:16:39.656 "unmap": true, 00:16:39.656 "write_zeroes": true, 00:16:39.656 "flush": false, 00:16:39.656 "reset": true, 00:16:39.656 "compare": false, 00:16:39.656 "compare_and_write": false, 00:16:39.656 "abort": false, 00:16:39.656 "nvme_admin": false, 00:16:39.656 "nvme_io": false 00:16:39.656 }, 00:16:39.656 "driver_specific": { 00:16:39.656 "lvol": { 00:16:39.656 "lvol_store_uuid": "6eb48a59-6d7e-40da-aaca-677fab2b140e", 00:16:39.656 "base_bdev": "nvme0n1", 00:16:39.656 "thin_provision": true, 00:16:39.656 "snapshot": false, 00:16:39.656 "clone": false, 00:16:39.656 "esnap_clone": false 00:16:39.656 } 00:16:39.656 } 00:16:39.656 } 00:16:39.656 ]' 00:16:39.656 06:41:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:39.656 06:41:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:39.656 06:41:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:39.915 06:41:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:39.915 06:41:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:39.915 06:41:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:39.915 06:41:50 -- ftl/common.sh@48 -- # cache_size=5171 00:16:39.915 06:41:50 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:39.915 06:41:50 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:39.915 06:41:50 -- ftl/restore.sh@48 -- # get_bdev_size c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.915 06:41:50 -- common/autotest_common.sh@1367 -- # local bdev_name=c439f935-76f8-4203-abd5-083bc2437f6d 00:16:39.915 06:41:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:39.915 06:41:50 -- common/autotest_common.sh@1369 -- # local bs 00:16:39.915 06:41:50 -- common/autotest_common.sh@1370 -- # local nb 00:16:39.915 06:41:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c439f935-76f8-4203-abd5-083bc2437f6d 00:16:40.174 06:41:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:40.174 { 00:16:40.174 "name": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:40.174 "aliases": [ 00:16:40.174 "lvs/nvme0n1p0" 00:16:40.174 ], 00:16:40.174 "product_name": "Logical Volume", 00:16:40.174 "block_size": 4096, 00:16:40.174 "num_blocks": 26476544, 00:16:40.174 "uuid": "c439f935-76f8-4203-abd5-083bc2437f6d", 00:16:40.174 "assigned_rate_limits": { 00:16:40.174 "rw_ios_per_sec": 0, 00:16:40.174 "rw_mbytes_per_sec": 0, 00:16:40.174 "r_mbytes_per_sec": 0, 00:16:40.174 "w_mbytes_per_sec": 0 00:16:40.174 }, 00:16:40.174 
"claimed": false, 00:16:40.174 "zoned": false, 00:16:40.174 "supported_io_types": { 00:16:40.174 "read": true, 00:16:40.174 "write": true, 00:16:40.174 "unmap": true, 00:16:40.174 "write_zeroes": true, 00:16:40.174 "flush": false, 00:16:40.174 "reset": true, 00:16:40.174 "compare": false, 00:16:40.174 "compare_and_write": false, 00:16:40.174 "abort": false, 00:16:40.174 "nvme_admin": false, 00:16:40.174 "nvme_io": false 00:16:40.174 }, 00:16:40.174 "driver_specific": { 00:16:40.174 "lvol": { 00:16:40.174 "lvol_store_uuid": "6eb48a59-6d7e-40da-aaca-677fab2b140e", 00:16:40.174 "base_bdev": "nvme0n1", 00:16:40.174 "thin_provision": true, 00:16:40.174 "snapshot": false, 00:16:40.174 "clone": false, 00:16:40.174 "esnap_clone": false 00:16:40.174 } 00:16:40.174 } 00:16:40.174 } 00:16:40.174 ]' 00:16:40.174 06:41:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:40.174 06:41:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:40.174 06:41:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:40.174 06:41:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:40.174 06:41:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:40.174 06:41:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:40.174 06:41:50 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:40.174 06:41:50 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c439f935-76f8-4203-abd5-083bc2437f6d --l2p_dram_limit 10' 00:16:40.174 06:41:50 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:40.174 06:41:50 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:16:40.174 06:41:50 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:40.174 06:41:50 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:40.174 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:40.174 06:41:50 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c439f935-76f8-4203-abd5-083bc2437f6d --l2p_dram_limit 10 -c nvc0n1p0 00:16:40.434 [2024-11-28 06:41:51.075798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.075840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:40.434 [2024-11-28 06:41:51.075855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:40.434 [2024-11-28 06:41:51.075863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.075914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.075923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.434 [2024-11-28 06:41:51.075937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:40.434 [2024-11-28 06:41:51.075945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.075969] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:40.434 [2024-11-28 06:41:51.076232] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:40.434 [2024-11-28 06:41:51.076249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.076260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.434 [2024-11-28 06:41:51.076270] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:40.434 [2024-11-28 06:41:51.076278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.076315] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a 00:16:40.434 [2024-11-28 06:41:51.077417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.077446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:40.434 [2024-11-28 06:41:51.077456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:40.434 [2024-11-28 06:41:51.077465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.082553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.082594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.434 [2024-11-28 06:41:51.082604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.018 ms 00:16:40.434 [2024-11-28 06:41:51.082615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.082697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.082723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.434 [2024-11-28 06:41:51.082732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:40.434 [2024-11-28 06:41:51.082743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.082780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.082791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:40.434 [2024-11-28 06:41:51.082801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:40.434 [2024-11-28 06:41:51.082809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.082833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:40.434 [2024-11-28 06:41:51.084254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.084283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.434 [2024-11-28 06:41:51.084295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:16:40.434 [2024-11-28 06:41:51.084302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.084340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.434 [2024-11-28 06:41:51.084348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:40.434 [2024-11-28 06:41:51.084362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:40.434 [2024-11-28 06:41:51.084369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.434 [2024-11-28 06:41:51.084387] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:40.434 [2024-11-28 06:41:51.084507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:40.434 [2024-11-28 06:41:51.084523] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:40.434 [2024-11-28 06:41:51.084535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:40.434 [2024-11-28 06:41:51.084548] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:40.434 [2024-11-28 06:41:51.084557] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:40.434 [2024-11-28 06:41:51.084567] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:40.434 [2024-11-28 06:41:51.084574] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:40.434 [2024-11-28 06:41:51.084583] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:40.434 [2024-11-28 06:41:51.084593] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:40.434 [2024-11-28 06:41:51.084602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.435 [2024-11-28 06:41:51.084610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:40.435 [2024-11-28 06:41:51.084619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:16:40.435 [2024-11-28 06:41:51.084626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.435 [2024-11-28 06:41:51.084697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.435 [2024-11-28 06:41:51.084726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:40.435 [2024-11-28 06:41:51.084735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:40.435 [2024-11-28 06:41:51.084742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.435 [2024-11-28 06:41:51.084816] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:40.435 [2024-11-28 06:41:51.084826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:40.435 [2024-11-28 06:41:51.084836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.435 [2024-11-28 06:41:51.084844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.435 [2024-11-28 06:41:51.084853] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:40.435 [2024-11-28 06:41:51.084860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:40.435 [2024-11-28 06:41:51.084868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:40.435 [2024-11-28 06:41:51.084875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:40.435 [2024-11-28 06:41:51.084884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:40.435 [2024-11-28 06:41:51.084891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.435 [2024-11-28 06:41:51.084901] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:40.435 [2024-11-28 06:41:51.084909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:40.435 [2024-11-28 06:41:51.084920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.435 [2024-11-28 06:41:51.084928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:40.435 [2024-11-28 06:41:51.084938] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:16:40.435 [2024-11-28 06:41:51.084945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.435 [2024-11-28 06:41:51.084954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:40.435 [2024-11-28 06:41:51.084962] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:40.435 [2024-11-28 06:41:51.084972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.435 [2024-11-28 06:41:51.084981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:40.435 [2024-11-28 06:41:51.084990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:40.435 [2024-11-28 06:41:51.084997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:40.435 [2024-11-28 06:41:51.085014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:40.435 [2024-11-28 06:41:51.085040] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:40.435 [2024-11-28 06:41:51.085067] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:40.435 [2024-11-28 06:41:51.085092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:40.435 [2024-11-28 06:41:51.085119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.435 [2024-11-28 06:41:51.085135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:40.435 [2024-11-28 06:41:51.085144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:40.435 [2024-11-28 06:41:51.085151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.435 [2024-11-28 06:41:51.085160] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:40.435 [2024-11-28 06:41:51.085170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:40.435 [2024-11-28 06:41:51.085179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.435 [2024-11-28 06:41:51.085199] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:40.435 [2024-11-28 06:41:51.085207] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:40.435 [2024-11-28 06:41:51.085217] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:40.435 [2024-11-28 06:41:51.085230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:40.435 [2024-11-28 06:41:51.085240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:40.435 [2024-11-28 06:41:51.085247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:40.435 [2024-11-28 06:41:51.085259] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:40.435 [2024-11-28 06:41:51.085270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:40.435 [2024-11-28 06:41:51.085281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:40.435 [2024-11-28 06:41:51.085288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:40.435 [2024-11-28 06:41:51.085297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:40.435 [2024-11-28 06:41:51.085304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:40.435 [2024-11-28 06:41:51.085313] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:40.435 [2024-11-28 06:41:51.085320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:40.435 [2024-11-28 06:41:51.085328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:40.435 [2024-11-28 06:41:51.085335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:40.435 [2024-11-28 06:41:51.085346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:40.435 [2024-11-28 06:41:51.085354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:40.435 [2024-11-28 06:41:51.085363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:40.435 [2024-11-28 06:41:51.085370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:40.435 [2024-11-28 06:41:51.085379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:40.435 [2024-11-28 06:41:51.085386] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:40.435 [2024-11-28 06:41:51.085395] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:40.435 [2024-11-28 06:41:51.085404] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:40.435 [2024-11-28 06:41:51.085413] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:40.435 [2024-11-28 06:41:51.085420] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:40.436 [2024-11-28 06:41:51.085429] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:40.436 [2024-11-28 06:41:51.085436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.085445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:40.436 [2024-11-28 06:41:51.085452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:16:40.436 [2024-11-28 06:41:51.085461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.091455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.091494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.436 [2024-11-28 06:41:51.091505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.944 ms 00:16:40.436 [2024-11-28 06:41:51.091515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.091600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.091610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:40.436 [2024-11-28 06:41:51.091623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:40.436 [2024-11-28 06:41:51.091632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.100184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.100217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.436 [2024-11-28 06:41:51.100230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.512 ms 00:16:40.436 [2024-11-28 06:41:51.100242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.100268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.100277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.436 [2024-11-28 06:41:51.100286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:40.436 [2024-11-28 06:41:51.100295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.100638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.100670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.436 [2024-11-28 06:41:51.100679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:16:40.436 [2024-11-28 06:41:51.100689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.100814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.100827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.436 [2024-11-28 06:41:51.100837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 
00:16:40.436 [2024-11-28 06:41:51.100847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.106068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.106198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.436 [2024-11-28 06:41:51.106214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.202 ms 00:16:40.436 [2024-11-28 06:41:51.106223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.114391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:40.436 [2024-11-28 06:41:51.117012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.117135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:40.436 [2024-11-28 06:41:51.117153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.721 ms 00:16:40.436 [2024-11-28 06:41:51.117160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.186775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.436 [2024-11-28 06:41:51.186823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:40.436 [2024-11-28 06:41:51.186837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.581 ms 00:16:40.436 [2024-11-28 06:41:51.186845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.436 [2024-11-28 06:41:51.186883] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:40.436 [2024-11-28 06:41:51.186894] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:42.969 [2024-11-28 06:41:53.668263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.668321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:42.969 [2024-11-28 06:41:53.668338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2481.364 ms 00:16:42.969 [2024-11-28 06:41:53.668347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.668547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.668559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.969 [2024-11-28 06:41:53.668569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:16:42.969 [2024-11-28 06:41:53.668577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.672300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.672337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:42.969 [2024-11-28 06:41:53.672351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:16:42.969 [2024-11-28 06:41:53.672360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.675626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.675667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:42.969 [2024-11-28 06:41:53.675681] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:16:42.969 [2024-11-28 06:41:53.675689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.675888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.675902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.969 [2024-11-28 06:41:53.675912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:42.969 [2024-11-28 06:41:53.675923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.697209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.697245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:42.969 [2024-11-28 06:41:53.697258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.262 ms 00:16:42.969 [2024-11-28 06:41:53.697266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.702203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.702336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:42.969 [2024-11-28 06:41:53.702358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:16:42.969 [2024-11-28 06:41:53.702368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.703592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.703623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:42.969 [2024-11-28 06:41:53.703634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:16:42.969 [2024-11-28 06:41:53.703642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.707183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.707216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.969 [2024-11-28 06:41:53.707228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms 00:16:42.969 [2024-11-28 06:41:53.707235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.707261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.707271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.969 [2024-11-28 06:41:53.707283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:42.969 [2024-11-28 06:41:53.707290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.707358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.969 [2024-11-28 06:41:53.707367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.969 [2024-11-28 06:41:53.707382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:42.969 [2024-11-28 06:41:53.707390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.969 [2024-11-28 06:41:53.708266] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2632.062 ms, result 0 00:16:42.969 { 00:16:42.969 "name": 
"ftl0", 00:16:42.969 "uuid": "97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a" 00:16:42.969 } 00:16:42.969 06:41:53 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:16:42.969 06:41:53 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:43.228 06:41:53 -- ftl/restore.sh@63 -- # echo ']}' 00:16:43.228 06:41:53 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:43.488 [2024-11-28 06:41:54.116436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.116478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.488 [2024-11-28 06:41:54.116491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.488 [2024-11-28 06:41:54.116501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.116527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.488 [2024-11-28 06:41:54.116999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.117021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.488 [2024-11-28 06:41:54.117032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:16:43.488 [2024-11-28 06:41:54.117040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.117304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.117383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.488 [2024-11-28 06:41:54.117396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:43.488 [2024-11-28 06:41:54.117405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.120659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.120774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.488 [2024-11-28 06:41:54.120794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:16:43.488 [2024-11-28 06:41:54.120802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.126913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.126941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:43.488 [2024-11-28 06:41:54.126952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:16:43.488 [2024-11-28 06:41:54.126959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.128610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.128757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.488 [2024-11-28 06:41:54.128780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:16:43.488 [2024-11-28 06:41:54.128788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.133499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.133535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.488 
[2024-11-28 06:41:54.133547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:16:43.488 [2024-11-28 06:41:54.133555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.133678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.133688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.488 [2024-11-28 06:41:54.133699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:43.488 [2024-11-28 06:41:54.133729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.135833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.135953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:43.488 [2024-11-28 06:41:54.135970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:16:43.488 [2024-11-28 06:41:54.135978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.137503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.137535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:43.488 [2024-11-28 06:41:54.137556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:16:43.488 [2024-11-28 06:41:54.137564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.138598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.138630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.488 [2024-11-28 06:41:54.138641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:16:43.488 [2024-11-28 06:41:54.138648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.488 [2024-11-28 06:41:54.139902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.488 [2024-11-28 06:41:54.139931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.488 [2024-11-28 06:41:54.139942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:16:43.489 [2024-11-28 06:41:54.139948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.489 [2024-11-28 06:41:54.139982] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.489 [2024-11-28 06:41:54.139996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 
261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140503] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 
06:41:54.140730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.489 [2024-11-28 06:41:54.140782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.490 [2024-11-28 06:41:54.140897] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.490 [2024-11-28 06:41:54.140906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a 00:16:43.490 [2024-11-28 06:41:54.140914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.490 [2024-11-28 06:41:54.140922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.490 [2024-11-28 06:41:54.140930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.490 [2024-11-28 06:41:54.140939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.490 [2024-11-28 06:41:54.140945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.490 [2024-11-28 
06:41:54.140957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.490 [2024-11-28 06:41:54.140964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.490 [2024-11-28 06:41:54.140971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.490 [2024-11-28 06:41:54.140977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.490 [2024-11-28 06:41:54.140987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.490 [2024-11-28 06:41:54.140995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.490 [2024-11-28 06:41:54.141005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:16:43.490 [2024-11-28 06:41:54.141013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.142437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.490 [2024-11-28 06:41:54.142454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.490 [2024-11-28 06:41:54.142465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:16:43.490 [2024-11-28 06:41:54.142472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.142529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.490 [2024-11-28 06:41:54.142537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.490 [2024-11-28 06:41:54.142549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:43.490 [2024-11-28 06:41:54.142555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.147917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.148020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.490 [2024-11-28 06:41:54.148070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.148093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.148161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.148190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.490 [2024-11-28 06:41:54.148215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.148235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.148313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.148345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.490 [2024-11-28 06:41:54.148374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.148471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.148508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.148530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.490 [2024-11-28 06:41:54.148624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.148651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.157554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.157692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.490 [2024-11-28 06:41:54.157791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.157819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.161401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.161509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.490 [2024-11-28 06:41:54.161560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.161586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.161642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.161667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.490 [2024-11-28 06:41:54.161694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.161743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.161808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.161837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.490 [2024-11-28 06:41:54.161861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.161912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.162006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.162079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.490 [2024-11-28 06:41:54.162136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.162159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.162222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.162248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.490 [2024-11-28 06:41:54.162276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.162295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.162346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.162368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.490 [2024-11-28 06:41:54.162389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 [2024-11-28 06:41:54.162408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.490 [2024-11-28 06:41:54.162496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.490 [2024-11-28 06:41:54.162523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.490 [2024-11-28 06:41:54.162546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.490 
[2024-11-28 06:41:54.162564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:16:43.490 [2024-11-28 06:41:54.163005] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.544 ms, result 0
00:16:43.490 true
00:16:43.490  06:41:54	-- ftl/restore.sh@66 -- # killprocess 83499
00:16:43.490  06:41:54	-- common/autotest_common.sh@936 -- # '[' -z 83499 ']'
00:16:43.490  06:41:54	-- common/autotest_common.sh@940 -- # kill -0 83499
00:16:43.490  06:41:54	-- common/autotest_common.sh@941 -- # uname
00:16:43.490  06:41:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:43.490  06:41:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83499
00:16:43.490  06:41:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:43.490  06:41:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:43.490  06:41:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 83499'
00:16:43.490 killing process with pid 83499
00:16:43.490  06:41:54	-- common/autotest_common.sh@955 -- # kill 83499
00:16:43.490  06:41:54	-- common/autotest_common.sh@960 -- # wait 83499
00:16:48.757  06:41:58	-- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:16:52.958 262144+0 records in
00:16:52.958 262144+0 records out
00:16:52.958 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.98987 s, 269 MB/s
00:16:52.958  06:42:02	-- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:16:54.333  06:42:04	-- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:54.333 [2024-11-28 06:42:04.918354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:54.333 [2024-11-28 06:42:04.918457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83713 ] 00:16:54.333 [2024-11-28 06:42:05.053604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.333 [2024-11-28 06:42:05.084304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.592 [2024-11-28 06:42:05.168274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:54.592 [2024-11-28 06:42:05.168337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:54.592 [2024-11-28 06:42:05.316930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.316971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:54.592 [2024-11-28 06:42:05.316984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.592 [2024-11-28 06:42:05.316992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.317047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.317060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:54.592 [2024-11-28 06:42:05.317068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:54.592 [2024-11-28 06:42:05.317078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.317094] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:54.592 [2024-11-28 06:42:05.317323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:54.592 [2024-11-28 06:42:05.317337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.317348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:54.592 [2024-11-28 06:42:05.317356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:16:54.592 [2024-11-28 06:42:05.317363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.318438] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:54.592 [2024-11-28 06:42:05.320607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.320641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:54.592 [2024-11-28 06:42:05.320651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:16:54.592 [2024-11-28 06:42:05.320662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.320725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.320736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:54.592 [2024-11-28 06:42:05.320744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:54.592 [2024-11-28 06:42:05.320755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.325432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 
06:42:05.325462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:54.592 [2024-11-28 06:42:05.325472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:16:54.592 [2024-11-28 06:42:05.325479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.325549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.325559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:54.592 [2024-11-28 06:42:05.325567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:54.592 [2024-11-28 06:42:05.325574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.325612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.325621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:54.592 [2024-11-28 06:42:05.325628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.592 [2024-11-28 06:42:05.325638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.325664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.592 [2024-11-28 06:42:05.326999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.327026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:54.592 [2024-11-28 06:42:05.327035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:16:54.592 [2024-11-28 06:42:05.327042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.327073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.327081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:54.592 [2024-11-28 06:42:05.327091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:54.592 [2024-11-28 06:42:05.327099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.327122] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:54.592 [2024-11-28 06:42:05.327140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:54.592 [2024-11-28 06:42:05.327176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:54.592 [2024-11-28 06:42:05.327190] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:54.592 [2024-11-28 06:42:05.327262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:54.592 [2024-11-28 06:42:05.327275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:54.592 [2024-11-28 06:42:05.327285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:54.592 [2024-11-28 06:42:05.327300] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:54.592 [2024-11-28 06:42:05.327311] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:54.592 [2024-11-28 06:42:05.327319] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:54.592 [2024-11-28 06:42:05.327327] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:54.592 [2024-11-28 06:42:05.327334] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:54.592 [2024-11-28 06:42:05.327341] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:54.592 [2024-11-28 06:42:05.327347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.327354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:54.592 [2024-11-28 06:42:05.327362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:54.592 [2024-11-28 06:42:05.327371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.327430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.592 [2024-11-28 06:42:05.327441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:54.592 [2024-11-28 06:42:05.327448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:54.592 [2024-11-28 06:42:05.327455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.592 [2024-11-28 06:42:05.327531] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:54.592 [2024-11-28 06:42:05.327542] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:54.592 [2024-11-28 06:42:05.327549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.592 [2024-11-28 06:42:05.327559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.592 [2024-11-28 06:42:05.327569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:54.592 [2024-11-28 06:42:05.327575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:54.592 [2024-11-28 06:42:05.327582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:54.592 [2024-11-28 06:42:05.327589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:54.592 [2024-11-28 06:42:05.327597] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:54.592 [2024-11-28 06:42:05.327604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.592 [2024-11-28 06:42:05.327610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:54.592 [2024-11-28 06:42:05.327617] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:54.592 [2024-11-28 06:42:05.327623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:54.592 [2024-11-28 06:42:05.327629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:54.593 [2024-11-28 06:42:05.327635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:54.593 [2024-11-28 06:42:05.327643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:54.593 [2024-11-28 06:42:05.327658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:54.593 [2024-11-28 06:42:05.327666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:54.593 [2024-11-28 06:42:05.327683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:54.593 [2024-11-28 06:42:05.327691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:54.593 [2024-11-28 06:42:05.327724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:54.593 [2024-11-28 06:42:05.327748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:54.593 [2024-11-28 06:42:05.327770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:54.593 [2024-11-28 06:42:05.327793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:54.593 [2024-11-28 06:42:05.327817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.593 [2024-11-28 06:42:05.327834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:54.593 [2024-11-28 06:42:05.327861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:54.593 [2024-11-28 06:42:05.327868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:54.593 [2024-11-28 06:42:05.327875] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:54.593 [2024-11-28 06:42:05.327884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:54.593 [2024-11-28 06:42:05.327892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:54.593 [2024-11-28 06:42:05.327908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:54.593 [2024-11-28 06:42:05.327917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:54.593 [2024-11-28 06:42:05.327924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:54.593 [2024-11-28 06:42:05.327932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:54.593 [2024-11-28 06:42:05.327939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:54.593 [2024-11-28 06:42:05.327946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:54.593 [2024-11-28 06:42:05.327955] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:54.593 [2024-11-28 06:42:05.327971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.593 [2024-11-28 06:42:05.327980] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:54.593 [2024-11-28 06:42:05.327988] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:54.593 [2024-11-28 06:42:05.327997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:54.593 [2024-11-28 06:42:05.328006] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:54.593 [2024-11-28 06:42:05.328014] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:54.593 [2024-11-28 06:42:05.328028] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:54.593 [2024-11-28 06:42:05.328037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:54.593 [2024-11-28 06:42:05.328046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:54.593 [2024-11-28 06:42:05.328053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:54.593 [2024-11-28 06:42:05.328061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:54.593 [2024-11-28 06:42:05.328068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:54.593 [2024-11-28 06:42:05.328075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:54.593 [2024-11-28 06:42:05.328083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:54.593 [2024-11-28 06:42:05.328090] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:54.593 [2024-11-28 06:42:05.328097] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:54.593 [2024-11-28 06:42:05.328107] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:54.593 [2024-11-28 06:42:05.328114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:54.593 [2024-11-28 06:42:05.328122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:54.593 [2024-11-28 06:42:05.328129] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:16:54.593 [2024-11-28 06:42:05.328136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.328146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:54.593 [2024-11-28 06:42:05.328157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:16:54.593 [2024-11-28 06:42:05.328169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.334040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.334170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.593 [2024-11-28 06:42:05.334233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.835 ms 00:16:54.593 [2024-11-28 06:42:05.334282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.334380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.334637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:54.593 [2024-11-28 06:42:05.334746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:54.593 [2024-11-28 06:42:05.334800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.354867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.355015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.593 [2024-11-28 06:42:05.355091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.996 ms 00:16:54.593 [2024-11-28 06:42:05.355121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.355279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.355360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.593 [2024-11-28 06:42:05.355414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.593 [2024-11-28 06:42:05.355456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.355879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.355973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.593 [2024-11-28 06:42:05.356027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:16:54.593 [2024-11-28 06:42:05.356052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.593 [2024-11-28 06:42:05.356199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.593 [2024-11-28 06:42:05.356263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.593 [2024-11-28 06:42:05.356313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:54.593 [2024-11-28 06:42:05.356338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.361987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.362093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.853 [2024-11-28 06:42:05.362190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.609 ms 00:16:54.853 [2024-11-28 
06:42:05.362237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.364807] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:54.853 [2024-11-28 06:42:05.364933] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:54.853 [2024-11-28 06:42:05.365012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.365037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:54.853 [2024-11-28 06:42:05.365060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:16:54.853 [2024-11-28 06:42:05.365081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.379813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.379913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:54.853 [2024-11-28 06:42:05.379965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.684 ms 00:16:54.853 [2024-11-28 06:42:05.379986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.382117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.382188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:54.853 [2024-11-28 06:42:05.382215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:16:54.853 [2024-11-28 06:42:05.382235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.383966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.384072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:54.853 [2024-11-28 06:42:05.384125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:16:54.853 [2024-11-28 06:42:05.384152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.384649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.384764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:54.853 [2024-11-28 06:42:05.384821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:16:54.853 [2024-11-28 06:42:05.384864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.402459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.402592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:54.853 [2024-11-28 06:42:05.402645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.562 ms 00:16:54.853 [2024-11-28 06:42:05.402667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.410389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:54.853 [2024-11-28 06:42:05.412783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.412884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:54.853 [2024-11-28 06:42:05.412943] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.727 ms 00:16:54.853 [2024-11-28 06:42:05.412965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.413043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.413070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:54.853 [2024-11-28 06:42:05.413090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.853 [2024-11-28 06:42:05.413110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.413177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.413206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:54.853 [2024-11-28 06:42:05.413232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:54.853 [2024-11-28 06:42:05.413288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.414442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.414539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:54.853 [2024-11-28 06:42:05.414586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:16:54.853 [2024-11-28 06:42:05.414613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.414653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.414678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.853 [2024-11-28 06:42:05.414698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:54.853 [2024-11-28 06:42:05.414729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.414776] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:54.853 [2024-11-28 06:42:05.414835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.414865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:54.853 [2024-11-28 06:42:05.414884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:54.853 [2024-11-28 06:42:05.414904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.418885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.418988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.853 [2024-11-28 06:42:05.419035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:16:54.853 [2024-11-28 06:42:05.419057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.419499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.853 [2024-11-28 06:42:05.419570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.853 [2024-11-28 06:42:05.419719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:54.853 [2024-11-28 06:42:05.419743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.853 [2024-11-28 06:42:05.420629] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 103.301 ms, result 0 00:16:55.789  [2024-11-28T06:42:07.495Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-28T06:42:08.870Z] Copying: 59/1024 [MB] (34 MBps) [2024-11-28T06:42:09.436Z] Copying: 104/1024 [MB] (45 MBps) [2024-11-28T06:42:10.812Z] Copying: 149/1024 [MB] (44 MBps) [2024-11-28T06:42:11.748Z] Copying: 193/1024 [MB] (44 MBps) [2024-11-28T06:42:12.684Z] Copying: 238/1024 [MB] (44 MBps) [2024-11-28T06:42:13.620Z] Copying: 283/1024 [MB] (44 MBps) [2024-11-28T06:42:14.557Z] Copying: 328/1024 [MB] (45 MBps) [2024-11-28T06:42:15.552Z] Copying: 373/1024 [MB] (44 MBps) [2024-11-28T06:42:16.484Z] Copying: 415/1024 [MB] (42 MBps) [2024-11-28T06:42:17.860Z] Copying: 460/1024 [MB] (44 MBps) [2024-11-28T06:42:18.795Z] Copying: 505/1024 [MB] (45 MBps) [2024-11-28T06:42:19.730Z] Copying: 552/1024 [MB] (46 MBps) [2024-11-28T06:42:20.665Z] Copying: 594/1024 [MB] (42 MBps) [2024-11-28T06:42:21.602Z] Copying: 637/1024 [MB] (42 MBps) [2024-11-28T06:42:22.538Z] Copying: 682/1024 [MB] (44 MBps) [2024-11-28T06:42:23.474Z] Copying: 724/1024 [MB] (42 MBps) [2024-11-28T06:42:24.849Z] Copying: 768/1024 [MB] (43 MBps) [2024-11-28T06:42:25.785Z] Copying: 813/1024 [MB] (45 MBps) [2024-11-28T06:42:26.722Z] Copying: 855/1024 [MB] (42 MBps) [2024-11-28T06:42:27.657Z] Copying: 898/1024 [MB] (43 MBps) [2024-11-28T06:42:28.592Z] Copying: 943/1024 [MB] (44 MBps) [2024-11-28T06:42:29.531Z] Copying: 988/1024 [MB] (44 MBps) [2024-11-28T06:42:29.531Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-28 06:42:29.221681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.221823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.761 [2024-11-28 06:42:29.221893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:18.761 [2024-11-28 06:42:29.221995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.222037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.761 [2024-11-28 06:42:29.222502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.222616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.761 [2024-11-28 06:42:29.222678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:18.761 [2024-11-28 06:42:29.222701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.224065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.224165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.761 [2024-11-28 06:42:29.224222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:17:18.761 [2024-11-28 06:42:29.224244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.236228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.236331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.761 [2024-11-28 06:42:29.236403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.957 ms 00:17:18.761 [2024-11-28 06:42:29.236426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.242524] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.242631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:18.761 [2024-11-28 06:42:29.242735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms 00:17:18.761 [2024-11-28 06:42:29.242793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.244465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.244566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.761 [2024-11-28 06:42:29.244616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:17:18.761 [2024-11-28 06:42:29.244636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.248266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.248370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.761 [2024-11-28 06:42:29.248454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:17:18.761 [2024-11-28 06:42:29.248476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.248591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.248616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.761 [2024-11-28 06:42:29.248636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:18.761 [2024-11-28 06:42:29.248683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.250733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.250827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.761 [2024-11-28 06:42:29.250875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:17:18.761 [2024-11-28 06:42:29.250896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.252473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.252568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.761 [2024-11-28 06:42:29.252617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:17:18.761 [2024-11-28 06:42:29.252637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.761 [2024-11-28 06:42:29.253834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.761 [2024-11-28 06:42:29.253927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.761 [2024-11-28 06:42:29.253976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:17:18.762 [2024-11-28 06:42:29.253996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.762 [2024-11-28 06:42:29.255167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.762 [2024-11-28 06:42:29.255319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.762 [2024-11-28 06:42:29.255377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:17:18.762 [2024-11-28 06:42:29.255497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:18.762 [2024-11-28 06:42:29.255543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.762 [2024-11-28 06:42:29.255599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.255935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 
[2024-11-28 06:42:29.256667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:17:18.762 [2024-11-28 06:42:29.256866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.256995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.762 [2024-11-28 06:42:29.257131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.763 [2024-11-28 06:42:29.257252] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.763 [2024-11-28 06:42:29.257261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a 00:17:18.763 [2024-11-28 06:42:29.257271] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.763 [2024-11-28 06:42:29.257278] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.763 [2024-11-28 06:42:29.257285] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.763 [2024-11-28 06:42:29.257292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.763 [2024-11-28 06:42:29.257299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.763 [2024-11-28 06:42:29.257307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.763 [2024-11-28 06:42:29.257313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.763 [2024-11-28 06:42:29.257319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.763 [2024-11-28 06:42:29.257325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.763 [2024-11-28 06:42:29.257333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.763 [2024-11-28 06:42:29.257340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.763 [2024-11-28 06:42:29.257348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:17:18.763 [2024-11-28 06:42:29.257355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.258689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.763 [2024-11-28 06:42:29.258733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.763 [2024-11-28 06:42:29.258746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:17:18.763 [2024-11-28 06:42:29.258754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.258803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.763 [2024-11-28 06:42:29.258812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.763 [2024-11-28 06:42:29.258819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:18.763 [2024-11-28 06:42:29.258830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.263956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.264072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.763 [2024-11-28 06:42:29.264133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.264204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.264275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.264332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.763 [2024-11-28 06:42:29.264360] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.264479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.264572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.264610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.763 [2024-11-28 06:42:29.264677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.264796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.264836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.264877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.763 [2024-11-28 06:42:29.264937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.264963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.273414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.273548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.763 [2024-11-28 06:42:29.273599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.273622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.763 [2024-11-28 06:42:29.277299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.763 [2024-11-28 06:42:29.277458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.763 [2024-11-28 06:42:29.277545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.763 [2024-11-28 06:42:29.277639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:17:18.763 [2024-11-28 06:42:29.277691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.763 [2024-11-28 06:42:29.277785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.763 [2024-11-28 06:42:29.277854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.763 [2024-11-28 06:42:29.277861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.763 [2024-11-28 06:42:29.277870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.763 [2024-11-28 06:42:29.277985] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.271 ms, result 0 00:17:19.332 00:17:19.332 00:17:19.332 06:42:30 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:19.332 [2024-11-28 06:42:30.059006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:19.332 [2024-11-28 06:42:30.059292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83984 ] 00:17:19.591 [2024-11-28 06:42:30.195935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.591 [2024-11-28 06:42:30.225273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.591 [2024-11-28 06:42:30.306980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.591 [2024-11-28 06:42:30.307242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.851 [2024-11-28 06:42:30.452757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.452946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.851 [2024-11-28 06:42:30.452971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.851 [2024-11-28 06:42:30.452979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.453040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.453051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.851 [2024-11-28 06:42:30.453060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:19.851 [2024-11-28 06:42:30.453069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.453088] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.851 [2024-11-28 
06:42:30.453310] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.851 [2024-11-28 06:42:30.453325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.453334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.851 [2024-11-28 06:42:30.453343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:19.851 [2024-11-28 06:42:30.453350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.454429] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.851 [2024-11-28 06:42:30.456500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.456672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.851 [2024-11-28 06:42:30.456688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:17:19.851 [2024-11-28 06:42:30.456700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.456763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.456773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.851 [2024-11-28 06:42:30.456785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:19.851 [2024-11-28 06:42:30.456792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.461202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.461233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.851 [2024-11-28 06:42:30.461242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.353 ms 00:17:19.851 [2024-11-28 06:42:30.461250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.851 [2024-11-28 06:42:30.461322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.851 [2024-11-28 06:42:30.461330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.852 [2024-11-28 06:42:30.461339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:19.852 [2024-11-28 06:42:30.461346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.852 [2024-11-28 06:42:30.461386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.852 [2024-11-28 06:42:30.461394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.852 [2024-11-28 06:42:30.461402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:19.852 [2024-11-28 06:42:30.461416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.852 [2024-11-28 06:42:30.461442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.852 [2024-11-28 06:42:30.462699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.852 [2024-11-28 06:42:30.462740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.852 [2024-11-28 06:42:30.462749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:17:19.852 [2024-11-28 06:42:30.462757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:19.852 [2024-11-28 06:42:30.462786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.852 [2024-11-28 06:42:30.462794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.852 [2024-11-28 06:42:30.462804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:19.852 [2024-11-28 06:42:30.462811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.852 [2024-11-28 06:42:30.462830] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.852 [2024-11-28 06:42:30.462848] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:19.852 [2024-11-28 06:42:30.462879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.852 [2024-11-28 06:42:30.462895] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:19.852 [2024-11-28 06:42:30.462969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:19.852 [2024-11-28 06:42:30.462982] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.852 [2024-11-28 06:42:30.462992] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:19.852 [2024-11-28 06:42:30.463008] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463017] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463026] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:19.852 [2024-11-28 06:42:30.463033] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.852 [2024-11-28 06:42:30.463040] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:19.852 [2024-11-28 06:42:30.463053] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:19.852 [2024-11-28 06:42:30.463061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.852 [2024-11-28 06:42:30.463071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.852 [2024-11-28 06:42:30.463078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:19.852 [2024-11-28 06:42:30.463089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.852 [2024-11-28 06:42:30.463149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.852 [2024-11-28 06:42:30.463158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.852 [2024-11-28 06:42:30.463165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:19.852 [2024-11-28 06:42:30.463171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.852 [2024-11-28 06:42:30.463244] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.852 [2024-11-28 06:42:30.463254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.852 [2024-11-28 06:42:30.463262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463271] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.852 [2024-11-28 06:42:30.463287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.852 [2024-11-28 06:42:30.463309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.852 [2024-11-28 06:42:30.463322] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.852 [2024-11-28 06:42:30.463328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:19.852 [2024-11-28 06:42:30.463335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.852 [2024-11-28 06:42:30.463343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.852 [2024-11-28 06:42:30.463352] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:19.852 [2024-11-28 06:42:30.463360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.852 [2024-11-28 06:42:30.463375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:19.852 [2024-11-28 06:42:30.463382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:19.852 [2024-11-28 06:42:30.463399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:19.852 [2024-11-28 06:42:30.463407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.852 [2024-11-28 06:42:30.463422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.852 [2024-11-28 06:42:30.463444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.852 [2024-11-28 06:42:30.463466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.852 [2024-11-28 06:42:30.463487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.852 [2024-11-28 
06:42:30.463515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.852 [2024-11-28 06:42:30.463529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.852 [2024-11-28 06:42:30.463536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:19.852 [2024-11-28 06:42:30.463543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.852 [2024-11-28 06:42:30.463550] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.852 [2024-11-28 06:42:30.463558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.852 [2024-11-28 06:42:30.463571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.852 [2024-11-28 06:42:30.463587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.852 [2024-11-28 06:42:30.463594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.852 [2024-11-28 06:42:30.463601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.852 [2024-11-28 06:42:30.463611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.852 [2024-11-28 06:42:30.463618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.852 [2024-11-28 06:42:30.463627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.852 [2024-11-28 06:42:30.463635] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.852 [2024-11-28 06:42:30.463650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.852 [2024-11-28 06:42:30.463666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:19.852 [2024-11-28 06:42:30.463675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:19.852 [2024-11-28 06:42:30.463683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:19.852 [2024-11-28 06:42:30.463691] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:19.852 [2024-11-28 06:42:30.463699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:19.852 [2024-11-28 06:42:30.463974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:19.852 [2024-11-28 06:42:30.464007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:19.852 [2024-11-28 06:42:30.464035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:19.852 [2024-11-28 06:42:30.464106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 
00:17:19.852 [2024-11-28 06:42:30.464137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:19.852 [2024-11-28 06:42:30.464166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:19.852 [2024-11-28 06:42:30.464194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:19.852 [2024-11-28 06:42:30.464301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:19.853 [2024-11-28 06:42:30.464358] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.853 [2024-11-28 06:42:30.464399] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.853 [2024-11-28 06:42:30.464432] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.853 [2024-11-28 06:42:30.464462] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.853 [2024-11-28 06:42:30.464491] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.853 [2024-11-28 06:42:30.464544] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.853 [2024-11-28 06:42:30.464574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.464593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.853 [2024-11-28 06:42:30.464645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:17:19.853 [2024-11-28 06:42:30.464676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.470301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.470415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.853 [2024-11-28 06:42:30.470466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:17:19.853 [2024-11-28 06:42:30.470489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.470585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.470658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.853 [2024-11-28 06:42:30.470774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:19.853 [2024-11-28 06:42:30.470798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.485830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.485964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.853 [2024-11-28 06:42:30.486022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.984 ms 00:17:19.853 [2024-11-28 06:42:30.486052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:19.853 [2024-11-28 06:42:30.486103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.486128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.853 [2024-11-28 06:42:30.486149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.853 [2024-11-28 06:42:30.486172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.486546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.486639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.853 [2024-11-28 06:42:30.486687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:19.853 [2024-11-28 06:42:30.486736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.486863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.486890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.853 [2024-11-28 06:42:30.486956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:19.853 [2024-11-28 06:42:30.486978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.492066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.492175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.853 [2024-11-28 06:42:30.492229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.053 ms 00:17:19.853 [2024-11-28 06:42:30.492261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.494562] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:19.853 [2024-11-28 06:42:30.494695] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.853 [2024-11-28 06:42:30.494799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.495025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.853 [2024-11-28 06:42:30.495038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.443 ms 00:17:19.853 [2024-11-28 06:42:30.495046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.509982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.510102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.853 [2024-11-28 06:42:30.510160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.897 ms 00:17:19.853 [2024-11-28 06:42:30.510183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.511727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.511829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.853 [2024-11-28 06:42:30.511881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:17:19.853 [2024-11-28 06:42:30.511891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.513306] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.513337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.853 [2024-11-28 06:42:30.513346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:17:19.853 [2024-11-28 06:42:30.513352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.513541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.513552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.853 [2024-11-28 06:42:30.513565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:19.853 [2024-11-28 06:42:30.513577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.530898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.531041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.853 [2024-11-28 06:42:30.531061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.306 ms 00:17:19.853 [2024-11-28 06:42:30.531069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.538416] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:19.853 [2024-11-28 06:42:30.540837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.540869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.853 [2024-11-28 06:42:30.540880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.734 ms 00:17:19.853 [2024-11-28 06:42:30.540888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.540949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.540961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.853 [2024-11-28 06:42:30.540970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.853 [2024-11-28 06:42:30.540980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.541028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.541040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.853 [2024-11-28 06:42:30.541051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:19.853 [2024-11-28 06:42:30.541059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.542272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.542354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:19.853 [2024-11-28 06:42:30.542405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:17:19.853 [2024-11-28 06:42:30.542426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.542466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.542546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.853 [2024-11-28 06:42:30.542569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:17:19.853 [2024-11-28 06:42:30.542875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.542978] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.853 [2024-11-28 06:42:30.543086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.543112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.853 [2024-11-28 06:42:30.543133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:19.853 [2024-11-28 06:42:30.543151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.546444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.546556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.853 [2024-11-28 06:42:30.546610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:17:19.853 [2024-11-28 06:42:30.546637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.546724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.853 [2024-11-28 06:42:30.546752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.853 [2024-11-28 06:42:30.546768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:19.853 [2024-11-28 06:42:30.546779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.853 [2024-11-28 06:42:30.548843] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.050 ms, result 0 00:17:21.229  [2024-11-28T06:42:32.935Z] Copying: 42/1024 [MB] (42 MBps) [2024-11-28T06:42:33.872Z] Copying: 86/1024 [MB] (44 MBps) [2024-11-28T06:42:34.805Z] Copying: 132/1024 [MB] (45 MBps) [2024-11-28T06:42:35.739Z] Copying: 181/1024 [MB] (48 MBps) [2024-11-28T06:42:36.747Z] Copying: 228/1024 [MB] (47 MBps) [2024-11-28T06:42:38.133Z] Copying: 275/1024 [MB] (46 MBps) [2024-11-28T06:42:39.067Z] Copying: 318/1024 [MB] (42 MBps) [2024-11-28T06:42:39.999Z] Copying: 344/1024 [MB] (25 MBps) [2024-11-28T06:42:40.934Z] Copying: 370/1024 [MB] (26 MBps) [2024-11-28T06:42:41.868Z] Copying: 399/1024 [MB] (28 MBps) [2024-11-28T06:42:42.804Z] Copying: 421/1024 [MB] (22 MBps) [2024-11-28T06:42:43.739Z] Copying: 457/1024 [MB] (35 MBps) [2024-11-28T06:42:45.114Z] Copying: 492/1024 [MB] (35 MBps) [2024-11-28T06:42:46.047Z] Copying: 535/1024 [MB] (43 MBps) [2024-11-28T06:42:46.983Z] Copying: 575/1024 [MB] (40 MBps) [2024-11-28T06:42:47.918Z] Copying: 609/1024 [MB] (34 MBps) [2024-11-28T06:42:48.853Z] Copying: 650/1024 [MB] (40 MBps) [2024-11-28T06:42:49.788Z] Copying: 680/1024 [MB] (29 MBps) [2024-11-28T06:42:50.723Z] Copying: 702/1024 [MB] (21 MBps) [2024-11-28T06:42:52.097Z] Copying: 728/1024 [MB] (26 MBps) [2024-11-28T06:42:53.030Z] Copying: 752/1024 [MB] (23 MBps) [2024-11-28T06:42:53.966Z] Copying: 772/1024 [MB] (20 MBps) [2024-11-28T06:42:54.898Z] Copying: 790/1024 [MB] (17 MBps) [2024-11-28T06:42:55.830Z] Copying: 833/1024 [MB] (43 MBps) [2024-11-28T06:42:56.768Z] Copying: 853/1024 [MB] (19 MBps) [2024-11-28T06:42:58.139Z] Copying: 875/1024 [MB] (21 MBps) [2024-11-28T06:42:59.074Z] Copying: 896/1024 [MB] (21 MBps) [2024-11-28T06:43:00.007Z] Copying: 921/1024 [MB] (25 MBps) [2024-11-28T06:43:00.984Z] Copying: 954/1024 [MB] (32 MBps) 
[2024-11-28T06:43:01.920Z] Copying: 981/1024 [MB] (27 MBps) [2024-11-28T06:43:02.179Z] Copying: 1015/1024 [MB] (34 MBps) [2024-11-28T06:43:02.439Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-28 06:43:02.293848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.293905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.669 [2024-11-28 06:43:02.293919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.669 [2024-11-28 06:43:02.293928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.293949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.669 [2024-11-28 06:43:02.294380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.294406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.669 [2024-11-28 06:43:02.294415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:17:51.669 [2024-11-28 06:43:02.294423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.294642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.294652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.669 [2024-11-28 06:43:02.294660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:51.669 [2024-11-28 06:43:02.294668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.298696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.298726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.669 [2024-11-28 06:43:02.298739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms 00:17:51.669 [2024-11-28 06:43:02.298746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.305379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.305513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:51.669 [2024-11-28 06:43:02.305530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:17:51.669 [2024-11-28 06:43:02.305539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.307922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.307950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.669 [2024-11-28 06:43:02.307959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:17:51.669 [2024-11-28 06:43:02.307966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.311750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.311877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.669 [2024-11-28 06:43:02.311892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:17:51.669 [2024-11-28 06:43:02.311900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.312009] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.312020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.669 [2024-11-28 06:43:02.312028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:51.669 [2024-11-28 06:43:02.312036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.313819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.313848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:51.669 [2024-11-28 06:43:02.313856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:17:51.669 [2024-11-28 06:43:02.313863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.315454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.315563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:51.669 [2024-11-28 06:43:02.315577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:17:51.669 [2024-11-28 06:43:02.315584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.317491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.317517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.669 [2024-11-28 06:43:02.317527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:17:51.669 [2024-11-28 06:43:02.317534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.318681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.669 [2024-11-28 06:43:02.318726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.669 [2024-11-28 06:43:02.318735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:17:51.669 [2024-11-28 06:43:02.318743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.669 [2024-11-28 06:43:02.318770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.669 [2024-11-28 06:43:02.318790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.318997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:51.669 [2024-11-28 06:43:02.319103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319236] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 
06:43:02.319421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.670 [2024-11-28 06:43:02.319555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.670 [2024-11-28 06:43:02.319562] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a 00:17:51.670 [2024-11-28 06:43:02.319571] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.670 [2024-11-28 06:43:02.319578] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.670 [2024-11-28 06:43:02.319586] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.670 [2024-11-28 06:43:02.319594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.670 [2024-11-28 06:43:02.319600] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.670 [2024-11-28 06:43:02.319608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:51.670 [2024-11-28 06:43:02.319615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.670 [2024-11-28 06:43:02.319622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
low: 0 00:17:51.670 [2024-11-28 06:43:02.319627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.670 [2024-11-28 06:43:02.319634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.670 [2024-11-28 06:43:02.319642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.670 [2024-11-28 06:43:02.319652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:17:51.670 [2024-11-28 06:43:02.319662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.321322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.670 [2024-11-28 06:43:02.321410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.670 [2024-11-28 06:43:02.321456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:17:51.670 [2024-11-28 06:43:02.321478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.321544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.670 [2024-11-28 06:43:02.321567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:51.670 [2024-11-28 06:43:02.321591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:51.670 [2024-11-28 06:43:02.321614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.327249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.670 [2024-11-28 06:43:02.327348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.670 [2024-11-28 06:43:02.327395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.670 [2024-11-28 06:43:02.327419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.327482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.670 [2024-11-28 06:43:02.327516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.670 [2024-11-28 06:43:02.327541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.670 [2024-11-28 06:43:02.327559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.327628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.670 [2024-11-28 06:43:02.327660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.670 [2024-11-28 06:43:02.327682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.670 [2024-11-28 06:43:02.327753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.670 [2024-11-28 06:43:02.327787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.670 [2024-11-28 06:43:02.327835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.670 [2024-11-28 06:43:02.327860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.670 [2024-11-28 06:43:02.328175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.336506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.336544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.671 [2024-11-28 06:43:02.336556] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.336564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.671 [2024-11-28 06:43:02.340165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.671 [2024-11-28 06:43:02.340229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.671 [2024-11-28 06:43:02.340300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.671 [2024-11-28 06:43:02.340403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:51.671 [2024-11-28 06:43:02.340463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.671 [2024-11-28 06:43:02.340525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.671 [2024-11-28 06:43:02.340587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.671 [2024-11-28 06:43:02.340594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.671 [2024-11-28 06:43:02.340605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.671 [2024-11-28 06:43:02.340916] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.851 ms, result 0 00:17:51.929 00:17:51.929 
00:17:51.929 06:43:02 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:54.467 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:17:54.467 06:43:04 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:17:54.467 [2024-11-28 06:43:04.721794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:54.467 [2024-11-28 06:43:04.722044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84347 ] 00:17:54.467 [2024-11-28 06:43:04.848961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.467 [2024-11-28 06:43:04.879560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.467 [2024-11-28 06:43:04.963191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.467 [2024-11-28 06:43:04.963258] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.467 [2024-11-28 06:43:05.112178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.112222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.467 [2024-11-28 06:43:05.112238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.467 [2024-11-28 06:43:05.112246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.112298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.112308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.467 [2024-11-28 06:43:05.112316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:54.467 [2024-11-28 06:43:05.112326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.112342] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.467 [2024-11-28 06:43:05.112593] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.467 [2024-11-28 06:43:05.112608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.112617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.467 [2024-11-28 06:43:05.112626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:54.467 [2024-11-28 06:43:05.112633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.113631] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.467 [2024-11-28 06:43:05.115862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.115900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.467 [2024-11-28 06:43:05.115910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:17:54.467 [2024-11-28 06:43:05.115920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.115967] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.115977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.467 [2024-11-28 06:43:05.115984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:54.467 [2024-11-28 06:43:05.115991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.120670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.120700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.467 [2024-11-28 06:43:05.120729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:17:54.467 [2024-11-28 06:43:05.120736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.120800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.120814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.467 [2024-11-28 06:43:05.120822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:54.467 [2024-11-28 06:43:05.120830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.120876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.120886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.467 [2024-11-28 06:43:05.120894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:54.467 [2024-11-28 06:43:05.120904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.467 [2024-11-28 06:43:05.120927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.467 [2024-11-28 06:43:05.122210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.467 [2024-11-28 06:43:05.122236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.467 [2024-11-28 06:43:05.122245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:17:54.468 [2024-11-28 06:43:05.122256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.122285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.122294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.468 [2024-11-28 06:43:05.122304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:54.468 [2024-11-28 06:43:05.122312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.122331] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.468 [2024-11-28 06:43:05.122348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:54.468 [2024-11-28 06:43:05.122379] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.468 [2024-11-28 06:43:05.122395] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:54.468 [2024-11-28 06:43:05.122469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:54.468 
[2024-11-28 06:43:05.122482] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.468 [2024-11-28 06:43:05.122492] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:54.468 [2024-11-28 06:43:05.122505] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122515] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122524] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.468 [2024-11-28 06:43:05.122534] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.468 [2024-11-28 06:43:05.122541] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:54.468 [2024-11-28 06:43:05.122550] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:54.468 [2024-11-28 06:43:05.122558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.122565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.468 [2024-11-28 06:43:05.122573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:54.468 [2024-11-28 06:43:05.122585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.122643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.122654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.468 [2024-11-28 06:43:05.122662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:54.468 [2024-11-28 06:43:05.122671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.122759] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.468 [2024-11-28 06:43:05.122770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.468 [2024-11-28 06:43:05.122779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.468 [2024-11-28 06:43:05.122808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.468 [2024-11-28 06:43:05.122829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.468 [2024-11-28 06:43:05.122842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.468 [2024-11-28 06:43:05.122850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.468 [2024-11-28 06:43:05.122858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.468 [2024-11-28 06:43:05.122866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.468 [2024-11-28 
06:43:05.122874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:54.468 [2024-11-28 06:43:05.122881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.468 [2024-11-28 06:43:05.122896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:54.468 [2024-11-28 06:43:05.122903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:54.468 [2024-11-28 06:43:05.122919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:54.468 [2024-11-28 06:43:05.122926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.468 [2024-11-28 06:43:05.122942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122956] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.468 [2024-11-28 06:43:05.122963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122978] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.468 [2024-11-28 06:43:05.122985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.468 [2024-11-28 06:43:05.122992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.468 [2024-11-28 06:43:05.122999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.468 [2024-11-28 06:43:05.123010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:54.468 [2024-11-28 06:43:05.123017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:54.468 [2024-11-28 06:43:05.123024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.468 [2024-11-28 06:43:05.123035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.468 [2024-11-28 06:43:05.123042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.468 [2024-11-28 06:43:05.123050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.468 [2024-11-28 06:43:05.123057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:54.468 [2024-11-28 06:43:05.123064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.468 [2024-11-28 06:43:05.123071] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.468 [2024-11-28 06:43:05.123079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.468 [2024-11-28 06:43:05.123090] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.468 [2024-11-28 06:43:05.123102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.468 [2024-11-28 06:43:05.123110] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.468 [2024-11-28 06:43:05.123119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 
102400.25 MiB 00:17:54.468 [2024-11-28 06:43:05.123126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.468 [2024-11-28 06:43:05.123135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.468 [2024-11-28 06:43:05.123142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.468 [2024-11-28 06:43:05.123149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.468 [2024-11-28 06:43:05.123158] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.468 [2024-11-28 06:43:05.123169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.468 [2024-11-28 06:43:05.123179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.468 [2024-11-28 06:43:05.123187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:54.468 [2024-11-28 06:43:05.123195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:54.468 [2024-11-28 06:43:05.123204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:54.468 [2024-11-28 06:43:05.123212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:54.468 [2024-11-28 06:43:05.123220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:54.468 [2024-11-28 06:43:05.123228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:54.468 [2024-11-28 06:43:05.123235] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:54.468 [2024-11-28 06:43:05.123242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:54.468 [2024-11-28 06:43:05.123248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:54.468 [2024-11-28 06:43:05.123255] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:54.468 [2024-11-28 06:43:05.123262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:54.468 [2024-11-28 06:43:05.123269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:54.468 [2024-11-28 06:43:05.123275] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.468 [2024-11-28 06:43:05.123287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.468 [2024-11-28 06:43:05.123298] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:54.468 [2024-11-28 06:43:05.123305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.468 [2024-11-28 06:43:05.123312] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.468 [2024-11-28 06:43:05.123318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.468 [2024-11-28 06:43:05.123326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.123333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.468 [2024-11-28 06:43:05.123340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:17:54.468 [2024-11-28 06:43:05.123352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.129185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.129320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.468 [2024-11-28 06:43:05.129335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.794 ms 00:17:54.468 [2024-11-28 06:43:05.129344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.129426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.129435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.468 [2024-11-28 06:43:05.129442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:54.468 [2024-11-28 06:43:05.129450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.145954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.145993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.468 [2024-11-28 06:43:05.146005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.466 ms 00:17:54.468 [2024-11-28 06:43:05.146012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.146050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.146061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.468 [2024-11-28 06:43:05.146069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.468 [2024-11-28 06:43:05.146082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.146410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.146426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.468 [2024-11-28 06:43:05.146434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:54.468 [2024-11-28 06:43:05.146442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.146550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.146560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.468 [2024-11-28 06:43:05.146571] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:54.468 [2024-11-28 06:43:05.146579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.468 [2024-11-28 06:43:05.151910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.468 [2024-11-28 06:43:05.152043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.468 [2024-11-28 06:43:05.152060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:17:54.469 [2024-11-28 06:43:05.152068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.154497] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:54.469 [2024-11-28 06:43:05.154533] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.469 [2024-11-28 06:43:05.154544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.154552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.469 [2024-11-28 06:43:05.154561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.393 ms 00:17:54.469 [2024-11-28 06:43:05.154570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.169725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.169756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.469 [2024-11-28 06:43:05.169766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.116 ms 00:17:54.469 [2024-11-28 06:43:05.169775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.171649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.171781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.469 [2024-11-28 06:43:05.171796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:17:54.469 [2024-11-28 06:43:05.171803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.173637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.173661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.469 [2024-11-28 06:43:05.173669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:17:54.469 [2024-11-28 06:43:05.173676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.173874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.173889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.469 [2024-11-28 06:43:05.173898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:54.469 [2024-11-28 06:43:05.173907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.191810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.191844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.469 [2024-11-28 06:43:05.191859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.888 ms 
00:17:54.469 [2024-11-28 06:43:05.191867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.199045] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:54.469 [2024-11-28 06:43:05.201580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.201608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.469 [2024-11-28 06:43:05.201619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.669 ms 00:17:54.469 [2024-11-28 06:43:05.201627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.201685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.201695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.469 [2024-11-28 06:43:05.201722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.469 [2024-11-28 06:43:05.201730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.201779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.201792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.469 [2024-11-28 06:43:05.201804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:54.469 [2024-11-28 06:43:05.201811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.202996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.203021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:54.469 [2024-11-28 06:43:05.203035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:17:54.469 [2024-11-28 06:43:05.203042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.203069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.203077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.469 [2024-11-28 06:43:05.203088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.469 [2024-11-28 06:43:05.203095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.203139] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.469 [2024-11-28 06:43:05.203155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.203163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.469 [2024-11-28 06:43:05.203170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:54.469 [2024-11-28 06:43:05.203177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.206949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.206980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.469 [2024-11-28 06:43:05.206989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:17:54.469 [2024-11-28 06:43:05.207002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 
06:43:05.207062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.469 [2024-11-28 06:43:05.207071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.469 [2024-11-28 06:43:05.207083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:54.469 [2024-11-28 06:43:05.207091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.469 [2024-11-28 06:43:05.207990] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.422 ms, result 0 00:17:55.843  [2024-11-28T06:43:07.549Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-28T06:43:08.484Z] Copying: 46/1024 [MB] (28 MBps) [2024-11-28T06:43:09.420Z] Copying: 75/1024 [MB] (28 MBps) [2024-11-28T06:43:10.354Z] Copying: 105/1024 [MB] (30 MBps) [2024-11-28T06:43:11.292Z] Copying: 134/1024 [MB] (28 MBps) [2024-11-28T06:43:12.226Z] Copying: 154/1024 [MB] (20 MBps) [2024-11-28T06:43:13.601Z] Copying: 198/1024 [MB] (44 MBps) [2024-11-28T06:43:14.535Z] Copying: 242/1024 [MB] (43 MBps) [2024-11-28T06:43:15.466Z] Copying: 287/1024 [MB] (44 MBps) [2024-11-28T06:43:16.399Z] Copying: 330/1024 [MB] (43 MBps) [2024-11-28T06:43:17.335Z] Copying: 374/1024 [MB] (43 MBps) [2024-11-28T06:43:18.268Z] Copying: 418/1024 [MB] (44 MBps) [2024-11-28T06:43:19.642Z] Copying: 463/1024 [MB] (44 MBps) [2024-11-28T06:43:20.577Z] Copying: 506/1024 [MB] (43 MBps) [2024-11-28T06:43:21.510Z] Copying: 551/1024 [MB] (44 MBps) [2024-11-28T06:43:22.444Z] Copying: 595/1024 [MB] (44 MBps) [2024-11-28T06:43:23.378Z] Copying: 638/1024 [MB] (42 MBps) [2024-11-28T06:43:24.312Z] Copying: 681/1024 [MB] (43 MBps) [2024-11-28T06:43:25.245Z] Copying: 725/1024 [MB] (43 MBps) [2024-11-28T06:43:26.618Z] Copying: 769/1024 [MB] (44 MBps) [2024-11-28T06:43:27.306Z] Copying: 813/1024 [MB] (43 MBps) [2024-11-28T06:43:28.240Z] Copying: 857/1024 [MB] (43 MBps) [2024-11-28T06:43:29.612Z] Copying: 903/1024 [MB] (46 MBps) [2024-11-28T06:43:30.544Z] Copying: 948/1024 [MB] (45 MBps) [2024-11-28T06:43:31.478Z] Copying: 993/1024 [MB] (44 MBps) [2024-11-28T06:43:31.736Z] Copying: 1023/1024 [MB] (30 MBps) [2024-11-28T06:43:31.736Z] Copying: 1024/1024 [MB] (average 38 MBps)[2024-11-28 06:43:31.726846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.966 [2024-11-28 06:43:31.726895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.966 [2024-11-28 06:43:31.726915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:20.966 [2024-11-28 06:43:31.726927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.966 [2024-11-28 06:43:31.730059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.966 [2024-11-28 06:43:31.733798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.966 [2024-11-28 06:43:31.733849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.966 [2024-11-28 06:43:31.733860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:18:20.966 [2024-11-28 06:43:31.733867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.744404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.744444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:21.226 [2024-11-28 06:43:31.744454] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:18:21.226 [2024-11-28 06:43:31.744461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.764268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.764300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:21.226 [2024-11-28 06:43:31.764310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.786 ms 00:18:21.226 [2024-11-28 06:43:31.764318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.770438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.770466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:21.226 [2024-11-28 06:43:31.770481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:18:21.226 [2024-11-28 06:43:31.770490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.771680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.771729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:21.226 [2024-11-28 06:43:31.771738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:18:21.226 [2024-11-28 06:43:31.771745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.775263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.775295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:21.226 [2024-11-28 06:43:31.775305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.489 ms 00:18:21.226 [2024-11-28 06:43:31.775312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.829087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.829241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:21.226 [2024-11-28 06:43:31.829260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.741 ms 00:18:21.226 [2024-11-28 06:43:31.829268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.226 [2024-11-28 06:43:31.830934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.226 [2024-11-28 06:43:31.830964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:21.226 [2024-11-28 06:43:31.830974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:18:21.226 [2024-11-28 06:43:31.830981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.227 [2024-11-28 06:43:31.832137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.227 [2024-11-28 06:43:31.832167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:21.227 [2024-11-28 06:43:31.832175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:18:21.227 [2024-11-28 06:43:31.832182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.227 [2024-11-28 06:43:31.833281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.227 [2024-11-28 06:43:31.833325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist superblock 00:18:21.227 [2024-11-28 06:43:31.833337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:18:21.227 [2024-11-28 06:43:31.833344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.227 [2024-11-28 06:43:31.834233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.227 [2024-11-28 06:43:31.834266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.227 [2024-11-28 06:43:31.834275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:18:21.227 [2024-11-28 06:43:31.834283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.227 [2024-11-28 06:43:31.834309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.227 [2024-11-28 06:43:31.834324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:18:21.227 [2024-11-28 06:43:31.834334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 
06:43:31.834471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:18:21.227 [2024-11-28 06:43:31.834666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:21.227 [2024-11-28 06:43:31.834951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.834996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.228 [2024-11-28 06:43:31.835122] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.228 [2024-11-28 06:43:31.835129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a 00:18:21.228 [2024-11-28 06:43:31.835137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:18:21.228 [2024-11-28 06:43:31.835144] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:18:21.228 [2024-11-28 06:43:31.835150] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:18:21.228 [2024-11-28 06:43:31.835158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:18:21.228 [2024-11-28 06:43:31.835168] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.228 [2024-11-28 06:43:31.835176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:21.228 [2024-11-28 06:43:31.835183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.228 [2024-11-28 06:43:31.835189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.228 [2024-11-28 06:43:31.835195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.228 [2024-11-28 06:43:31.835202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.228 [2024-11-28 06:43:31.835212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.228 [2024-11-28 06:43:31.835220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:18:21.228 [2024-11-28 06:43:31.835228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.836819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.228 [2024-11-28 06:43:31.836919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.228 [2024-11-28 06:43:31.836977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:18:21.228 [2024-11-28 06:43:31.836999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.837075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.228 [2024-11-28 06:43:31.837099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.228 [2024-11-28 06:43:31.837238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:21.228 [2024-11-28 06:43:31.837261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 
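The WAF figure in the statistics dump above is simply total writes divided by user writes; a minimal Python sketch (illustrative only, not part of the SPDK test suite) reproduces the 1.0081 value from the counters logged by ftl_dev_dump_stats:

    # Reproduce the write-amplification factor (WAF) printed in the
    # ftl_debug.c stats dump above. Counter values are copied verbatim
    # from the log; the script itself is a hypothetical helper.
    total_writes = 119232   # "total writes" in the dump
    user_writes = 118272    # "user writes" in the dump

    waf = total_writes / user_writes
    print(f"WAF: {waf:.4f}")   # -> WAF: 1.0081, matching the dump

The roughly 0.8% overhead corresponds to the 960 extra blocks (119232 - 118272) of metadata and relocation writes issued on top of the user data during this run.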
[2024-11-28 06:43:31.842090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.842213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.228 [2024-11-28 06:43:31.842269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.842314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.842380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.842428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.228 [2024-11-28 06:43:31.842452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.842537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.842617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.842754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.228 [2024-11-28 06:43:31.842787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.842845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.842878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.842904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.228 [2024-11-28 06:43:31.842954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.843041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.851236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.851382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.228 [2024-11-28 06:43:31.851484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.851505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.228 [2024-11-28 06:43:31.855194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.228 [2024-11-28 06:43:31.855434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.228 [2024-11-28 06:43:31.855506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855514] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.228 [2024-11-28 06:43:31.855594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.228 [2024-11-28 06:43:31.855651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.228 [2024-11-28 06:43:31.855736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.228 [2024-11-28 06:43:31.855800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.228 [2024-11-28 06:43:31.855808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.228 [2024-11-28 06:43:31.855815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.228 [2024-11-28 06:43:31.855920] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 130.964 ms, result 0 00:18:22.163 00:18:22.163 00:18:22.163 06:43:32 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:18:22.163 [2024-11-28 06:43:32.882448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:22.163 [2024-11-28 06:43:32.882549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84648 ] 00:18:22.421 [2024-11-28 06:43:33.017518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.421 [2024-11-28 06:43:33.047263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.421 [2024-11-28 06:43:33.130436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.421 [2024-11-28 06:43:33.130504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.681 [2024-11-28 06:43:33.276165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.276210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.681 [2024-11-28 06:43:33.276223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.681 [2024-11-28 06:43:33.276230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.276279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.276289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.681 [2024-11-28 06:43:33.276297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:22.681 [2024-11-28 06:43:33.276308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.276325] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.681 [2024-11-28 06:43:33.276556] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.681 [2024-11-28 06:43:33.276570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.276579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.681 [2024-11-28 06:43:33.276587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:18:22.681 [2024-11-28 06:43:33.276595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.277638] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.681 [2024-11-28 06:43:33.279800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.279833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.681 [2024-11-28 06:43:33.279846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:18:22.681 [2024-11-28 06:43:33.279855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.279903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.279914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.681 [2024-11-28 06:43:33.279921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:22.681 [2024-11-28 06:43:33.279928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.284575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 
06:43:33.284609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.681 [2024-11-28 06:43:33.284618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:18:22.681 [2024-11-28 06:43:33.284627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.284692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.284701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.681 [2024-11-28 06:43:33.284733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:22.681 [2024-11-28 06:43:33.284744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.284784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.284794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.681 [2024-11-28 06:43:33.284802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.681 [2024-11-28 06:43:33.284812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.284833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.681 [2024-11-28 06:43:33.286110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.286135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.681 [2024-11-28 06:43:33.286144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:18:22.681 [2024-11-28 06:43:33.286151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.286182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.286193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.681 [2024-11-28 06:43:33.286203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.681 [2024-11-28 06:43:33.286210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.286229] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.681 [2024-11-28 06:43:33.286246] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:22.681 [2024-11-28 06:43:33.286277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.681 [2024-11-28 06:43:33.286294] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:22.681 [2024-11-28 06:43:33.286365] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:22.681 [2024-11-28 06:43:33.286376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.681 [2024-11-28 06:43:33.286386] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:22.681 [2024-11-28 06:43:33.286397] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.681 [2024-11-28 06:43:33.286406] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.681 [2024-11-28 06:43:33.286414] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:22.681 [2024-11-28 06:43:33.286423] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.681 [2024-11-28 06:43:33.286431] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:22.681 [2024-11-28 06:43:33.286441] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:22.681 [2024-11-28 06:43:33.286448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.286455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.681 [2024-11-28 06:43:33.286462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:18:22.681 [2024-11-28 06:43:33.286472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.286530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.681 [2024-11-28 06:43:33.286537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.681 [2024-11-28 06:43:33.286544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:22.681 [2024-11-28 06:43:33.286551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.681 [2024-11-28 06:43:33.286618] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.681 [2024-11-28 06:43:33.286633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.681 [2024-11-28 06:43:33.286640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.681 [2024-11-28 06:43:33.286648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.681 [2024-11-28 06:43:33.286657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.681 [2024-11-28 06:43:33.286664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.681 [2024-11-28 06:43:33.286670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:22.681 [2024-11-28 06:43:33.286680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.681 [2024-11-28 06:43:33.286687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:22.681 [2024-11-28 06:43:33.286693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.681 [2024-11-28 06:43:33.286700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.681 [2024-11-28 06:43:33.286725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:22.681 [2024-11-28 06:43:33.286732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.681 [2024-11-28 06:43:33.286738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.681 [2024-11-28 06:43:33.286745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:22.681 [2024-11-28 06:43:33.286751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.681 [2024-11-28 06:43:33.286759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.681 [2024-11-28 06:43:33.286766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:22.681 [2024-11-28 06:43:33.286774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:22.681 [2024-11-28 06:43:33.286781] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:22.681 [2024-11-28 06:43:33.286790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:22.681 [2024-11-28 06:43:33.286798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:22.681 [2024-11-28 06:43:33.286805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.682 [2024-11-28 06:43:33.286814] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.682 [2024-11-28 06:43:33.286829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.682 [2024-11-28 06:43:33.286836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.682 [2024-11-28 06:43:33.286850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.682 [2024-11-28 06:43:33.286857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.682 [2024-11-28 06:43:33.286872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.682 [2024-11-28 06:43:33.286879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:22.682 [2024-11-28 06:43:33.286894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.682 [2024-11-28 06:43:33.286902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.682 [2024-11-28 06:43:33.286917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.682 [2024-11-28 06:43:33.286924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:22.682 [2024-11-28 06:43:33.286933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.682 [2024-11-28 06:43:33.286940] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.682 [2024-11-28 06:43:33.286948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.682 [2024-11-28 06:43:33.286956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.682 [2024-11-28 06:43:33.286967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.682 [2024-11-28 06:43:33.286975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.682 [2024-11-28 06:43:33.286983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.682 [2024-11-28 06:43:33.286990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.682 [2024-11-28 06:43:33.286998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.682 [2024-11-28 06:43:33.287005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.682 [2024-11-28 06:43:33.287012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.682 [2024-11-28 06:43:33.287020] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.682 [2024-11-28 06:43:33.287030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.682 [2024-11-28 06:43:33.287039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:22.682 [2024-11-28 06:43:33.287049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:22.682 [2024-11-28 06:43:33.287056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:22.682 [2024-11-28 06:43:33.287066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:22.682 [2024-11-28 06:43:33.287074] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:22.682 [2024-11-28 06:43:33.287082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:22.682 [2024-11-28 06:43:33.287090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:22.682 [2024-11-28 06:43:33.287098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:22.682 [2024-11-28 06:43:33.287105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:22.682 [2024-11-28 06:43:33.287114] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:22.682 [2024-11-28 06:43:33.287122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:22.682 [2024-11-28 06:43:33.287130] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:22.682 [2024-11-28 06:43:33.287138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:22.682 [2024-11-28 06:43:33.287147] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.682 [2024-11-28 06:43:33.287156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.682 [2024-11-28 06:43:33.287165] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.682 [2024-11-28 06:43:33.287173] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.682 [2024-11-28 06:43:33.287182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.682 [2024-11-28 06:43:33.287190] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:18:22.682 [2024-11-28 06:43:33.287200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.287207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.682 [2024-11-28 06:43:33.287214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:18:22.682 [2024-11-28 06:43:33.287227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.293115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.293144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.682 [2024-11-28 06:43:33.293153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms 00:18:22.682 [2024-11-28 06:43:33.293164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.293258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.293268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.682 [2024-11-28 06:43:33.293277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:22.682 [2024-11-28 06:43:33.293284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.318132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.318171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.682 [2024-11-28 06:43:33.318183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.807 ms 00:18:22.682 [2024-11-28 06:43:33.318190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.318235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.318245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.682 [2024-11-28 06:43:33.318253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.682 [2024-11-28 06:43:33.318263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.318607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.318646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.682 [2024-11-28 06:43:33.318655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:22.682 [2024-11-28 06:43:33.318663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.318784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.318805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.682 [2024-11-28 06:43:33.318814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:22.682 [2024-11-28 06:43:33.318822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.323962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.324106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.682 [2024-11-28 06:43:33.324121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:18:22.682 [2024-11-28 
06:43:33.324129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.326365] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:18:22.682 [2024-11-28 06:43:33.326401] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.682 [2024-11-28 06:43:33.326411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.326423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.682 [2024-11-28 06:43:33.326431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:18:22.682 [2024-11-28 06:43:33.326438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.340951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.340982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.682 [2024-11-28 06:43:33.340993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.478 ms 00:18:22.682 [2024-11-28 06:43:33.341000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.342971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.343087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.682 [2024-11-28 06:43:33.343101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:18:22.682 [2024-11-28 06:43:33.343108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.344455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.344486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.682 [2024-11-28 06:43:33.344495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:18:22.682 [2024-11-28 06:43:33.344501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.682 [2024-11-28 06:43:33.344697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.682 [2024-11-28 06:43:33.344719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.682 [2024-11-28 06:43:33.344730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:22.683 [2024-11-28 06:43:33.344740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.362498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.362642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.683 [2024-11-28 06:43:33.362666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms 00:18:22.683 [2024-11-28 06:43:33.362674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.370029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:22.683 [2024-11-28 06:43:33.372407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.372436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.683 [2024-11-28 06:43:33.372452] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.678 ms 00:18:22.683 [2024-11-28 06:43:33.372460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.372520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.372534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.683 [2024-11-28 06:43:33.372543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.683 [2024-11-28 06:43:33.372551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.373643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.373782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.683 [2024-11-28 06:43:33.373797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:18:22.683 [2024-11-28 06:43:33.373806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.375019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.375047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:22.683 [2024-11-28 06:43:33.375056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:18:22.683 [2024-11-28 06:43:33.375064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.375109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.375118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.683 [2024-11-28 06:43:33.375132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.683 [2024-11-28 06:43:33.375142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.375175] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.683 [2024-11-28 06:43:33.375185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.375197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.683 [2024-11-28 06:43:33.375205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.683 [2024-11-28 06:43:33.375213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.378551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.378584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.683 [2024-11-28 06:43:33.378603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:18:22.683 [2024-11-28 06:43:33.378613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.378676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.683 [2024-11-28 06:43:33.378686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.683 [2024-11-28 06:43:33.378700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:22.683 [2024-11-28 06:43:33.378719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.683 [2024-11-28 06:43:33.384602] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 107.078 ms, result 0 00:18:24.057  [2024-11-28T06:43:35.762Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-28T06:43:36.698Z] Copying: 56/1024 [MB] (30 MBps) [2024-11-28T06:43:37.631Z] Copying: 80/1024 [MB] (23 MBps) [2024-11-28T06:43:38.567Z] Copying: 114/1024 [MB] (34 MBps) [2024-11-28T06:43:39.942Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-28T06:43:40.877Z] Copying: 156/1024 [MB] (22 MBps) [2024-11-28T06:43:41.810Z] Copying: 173/1024 [MB] (16 MBps) [2024-11-28T06:43:42.744Z] Copying: 195/1024 [MB] (21 MBps) [2024-11-28T06:43:43.678Z] Copying: 222/1024 [MB] (26 MBps) [2024-11-28T06:43:44.612Z] Copying: 245/1024 [MB] (23 MBps) [2024-11-28T06:43:45.990Z] Copying: 274/1024 [MB] (29 MBps) [2024-11-28T06:43:46.926Z] Copying: 304/1024 [MB] (29 MBps) [2024-11-28T06:43:47.863Z] Copying: 344/1024 [MB] (40 MBps) [2024-11-28T06:43:48.797Z] Copying: 372/1024 [MB] (27 MBps) [2024-11-28T06:43:49.732Z] Copying: 398/1024 [MB] (26 MBps) [2024-11-28T06:43:50.665Z] Copying: 423/1024 [MB] (24 MBps) [2024-11-28T06:43:51.599Z] Copying: 452/1024 [MB] (29 MBps) [2024-11-28T06:43:53.038Z] Copying: 499/1024 [MB] (47 MBps) [2024-11-28T06:43:53.604Z] Copying: 547/1024 [MB] (47 MBps) [2024-11-28T06:43:54.978Z] Copying: 592/1024 [MB] (45 MBps) [2024-11-28T06:43:55.921Z] Copying: 627/1024 [MB] (35 MBps) [2024-11-28T06:43:56.856Z] Copying: 662/1024 [MB] (34 MBps) [2024-11-28T06:43:57.792Z] Copying: 712/1024 [MB] (50 MBps) [2024-11-28T06:43:58.726Z] Copying: 758/1024 [MB] (45 MBps) [2024-11-28T06:43:59.658Z] Copying: 803/1024 [MB] (45 MBps) [2024-11-28T06:44:00.592Z] Copying: 850/1024 [MB] (47 MBps) [2024-11-28T06:44:01.964Z] Copying: 896/1024 [MB] (45 MBps) [2024-11-28T06:44:02.897Z] Copying: 943/1024 [MB] (46 MBps) [2024-11-28T06:44:03.463Z] Copying: 990/1024 [MB] (47 MBps) [2024-11-28T06:44:04.033Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-28 06:44:03.847072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.847150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:53.263 [2024-11-28 06:44:03.847170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:53.263 [2024-11-28 06:44:03.847182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.263 [2024-11-28 06:44:03.847214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.263 [2024-11-28 06:44:03.847770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.847802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:53.263 [2024-11-28 06:44:03.847816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:53.263 [2024-11-28 06:44:03.847828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.263 [2024-11-28 06:44:03.848171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.848192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:53.263 [2024-11-28 06:44:03.848205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:53.263 [2024-11-28 06:44:03.848218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.263 [2024-11-28 06:44:03.866864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.866896] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:53.263 [2024-11-28 06:44:03.866906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.623 ms 00:18:53.263 [2024-11-28 06:44:03.866913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.263 [2024-11-28 06:44:03.876234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.876265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:53.263 [2024-11-28 06:44:03.876280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.288 ms 00:18:53.263 [2024-11-28 06:44:03.876288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.263 [2024-11-28 06:44:03.877573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.263 [2024-11-28 06:44:03.877607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:53.263 [2024-11-28 06:44:03.877616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:18:53.264 [2024-11-28 06:44:03.877624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.881085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.264 [2024-11-28 06:44:03.881116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:53.264 [2024-11-28 06:44:03.881126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:18:53.264 [2024-11-28 06:44:03.881134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.977083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.264 [2024-11-28 06:44:03.977113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:53.264 [2024-11-28 06:44:03.977128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.917 ms 00:18:53.264 [2024-11-28 06:44:03.977136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.978800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.264 [2024-11-28 06:44:03.978829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:53.264 [2024-11-28 06:44:03.978839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:18:53.264 [2024-11-28 06:44:03.978847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.980238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.264 [2024-11-28 06:44:03.980388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:53.264 [2024-11-28 06:44:03.980402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:18:53.264 [2024-11-28 06:44:03.980409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.981417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.264 [2024-11-28 06:44:03.981442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:53.264 [2024-11-28 06:44:03.981450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:18:53.264 [2024-11-28 06:44:03.981458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.982439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
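Note: the 'FTL shutdown' trace above persists metadata in dependency order (L2P, NV cache metadata, valid map, P2L, band info, trim, then the superblock), with Persist P2L metadata dominating at 95.917 ms and Persist L2P next at 18.623 ms, which is consistent with the L2P size reported at startup (20971520 entries x 4 bytes = 80 MiB, matching the 80.00 MiB l2p region). A minimal sketch for totaling the per-step durations from a saved copy of this log; the path ftl.log is hypothetical, not something the test writes:

  # Sum every 'duration: X ms' trace_step line in a saved log copy.
  grep -o 'duration: [0-9.]* ms' ftl.log |
      awk '{ total += $2 } END { printf "total traced: %.3f ms\n", total }'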
00:18:53.264 [2024-11-28 06:44:03.982470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:53.264 [2024-11-28 06:44:03.982478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:18:53.264 [2024-11-28 06:44:03.982485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.264 [2024-11-28 06:44:03.982510] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:53.264 [2024-11-28 06:44:03.982525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:18:53.264 [2024-11-28 06:44:03.982535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982903] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.982999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:53.264 [2024-11-28 06:44:03.983068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 
06:44:03.983097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:53.265 [2024-11-28 06:44:03.983275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:18:53.265 [2024-11-28 06:44:03.983282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:53.265 [2024-11-28 06:44:03.983290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:53.265 [2024-11-28 06:44:03.983297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:53.265 [2024-11-28 06:44:03.983305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:53.265 [2024-11-28 06:44:03.983320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:53.265 [2024-11-28 06:44:03.983332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97e9e6bb-d5b7-4b0c-8aa2-bb5f5045248a
00:18:53.265 [2024-11-28 06:44:03.983340] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:18:53.265 [2024-11-28 06:44:03.983347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16320
00:18:53.265 [2024-11-28 06:44:03.983353] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15360
00:18:53.265 [2024-11-28 06:44:03.983362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625
00:18:53.265 [2024-11-28 06:44:03.983375] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:53.265 [2024-11-28 06:44:03.983382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:18:53.265 [2024-11-28 06:44:03.983390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:18:53.265 [2024-11-28 06:44:03.983396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:18:53.265 [2024-11-28 06:44:03.983402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:18:53.265 [2024-11-28 06:44:03.983409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:53.265 [2024-11-28 06:44:03.983416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:53.265 [2024-11-28 06:44:03.983423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms
00:18:53.265 [2024-11-28 06:44:03.983430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:53.265 [2024-11-28 06:44:03.985102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:53.265 [2024-11-28 06:44:03.985196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:53.265 [2024-11-28 06:44:03.985257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms
00:18:53.265 [2024-11-28 06:44:03.985279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:53.265 [2024-11-28 06:44:03.985422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:53.265 [2024-11-28 06:44:03.985498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:53.265 [2024-11-28 06:44:03.985548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:18:53.265 [2024-11-28 06:44:03.985570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:53.265 [2024-11-28 06:44:03.990489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:53.265 [2024-11-28 06:44:03.990602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:53.265 [2024-11-28 06:44:03.990657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
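Note: the statistics block ties out arithmetically. WAF = total writes / user writes = 16320 / 15360 = 1.0625, i.e. 960 blocks of non-user (metadata and relocation) writes on top of user I/O, and total valid LBAs (133632) equals the valid count of the single open band (Band 1: 133632 / 261120). A minimal sketch to re-check both from a saved copy of this log; ftl.log is a hypothetical path:

  # Recompute WAF from the dumped counters.
  awk 'BEGIN { printf "WAF = %.4f\n", 16320 / 15360 }'            # -> 1.0625
  # Sum per-band valid counts; should match 'total valid LBAs'.
  grep -oE 'Band [0-9]+: [0-9]+' ftl.log |
      awk '{ valid += $3 } END { print "valid LBAs:", valid }'    # -> 133632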
00:18:53.265 [2024-11-28 06:44:03.990682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:03.990780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:03.990812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.265 [2024-11-28 06:44:03.990902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:03.990925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:03.991007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:03.991094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.265 [2024-11-28 06:44:03.991129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:03.991150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:03.991208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:03.991238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.265 [2024-11-28 06:44:03.991258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:03.991277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:03.999352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:03.999484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.265 [2024-11-28 06:44:03.999563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:03.999678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.265 [2024-11-28 06:44:04.003357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.265 [2024-11-28 06:44:04.003430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.265 [2024-11-28 06:44:04.003496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.265 [2024-11-28 
06:44:04.003587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:53.265 [2024-11-28 06:44:04.003644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.265 [2024-11-28 06:44:04.003700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.265 [2024-11-28 06:44:04.003764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.265 [2024-11-28 06:44:04.003774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.265 [2024-11-28 06:44:04.003782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.265 [2024-11-28 06:44:04.003790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.266 [2024-11-28 06:44:04.003907] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 156.817 ms, result 0 00:18:53.524 00:18:53.524 00:18:53.524 06:44:04 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:56.051 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:56.051 06:44:06 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:18:56.051 06:44:06 -- ftl/restore.sh@85 -- # restore_kill 00:18:56.051 06:44:06 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:56.051 06:44:06 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:56.051 06:44:06 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:56.051 Process with pid 83499 is not found 00:18:56.051 06:44:06 -- ftl/restore.sh@32 -- # killprocess 83499 00:18:56.051 06:44:06 -- common/autotest_common.sh@936 -- # '[' -z 83499 ']' 00:18:56.051 06:44:06 -- common/autotest_common.sh@940 -- # kill -0 83499 00:18:56.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83499) - No such process 00:18:56.051 06:44:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83499 is not found' 00:18:56.051 Remove shared memory files 00:18:56.051 06:44:06 -- ftl/restore.sh@33 -- # remove_shm 00:18:56.051 06:44:06 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:56.051 06:44:06 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:56.051 06:44:06 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:56.051 06:44:06 -- ftl/common.sh@207 -- # rm -f rm -f 00:18:56.051 06:44:06 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:56.051 06:44:06 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:56.051 00:18:56.051 real 2m18.914s 00:18:56.051 user 2m8.460s 00:18:56.051 sys 0m11.381s 00:18:56.051 06:44:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:56.051 06:44:06 -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.051 ************************************ 00:18:56.051 END TEST ftl_restore 00:18:56.051 ************************************ 00:18:56.051 06:44:06 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:56.051 06:44:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.051 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:56.051 ************************************ 00:18:56.051 START TEST ftl_dirty_shutdown 00:18:56.051 ************************************ 00:18:56.051 06:44:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:18:56.051 * Looking for test storage... 00:18:56.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.051 06:44:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:56.051 06:44:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:56.051 06:44:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:56.051 06:44:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:56.051 06:44:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:56.051 06:44:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:56.051 06:44:06 -- scripts/common.sh@335 -- # IFS=.-: 00:18:56.051 06:44:06 -- scripts/common.sh@335 -- # read -ra ver1 00:18:56.051 06:44:06 -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.051 06:44:06 -- scripts/common.sh@336 -- # read -ra ver2 00:18:56.051 06:44:06 -- scripts/common.sh@337 -- # local 'op=<' 00:18:56.051 06:44:06 -- scripts/common.sh@339 -- # ver1_l=2 00:18:56.051 06:44:06 -- scripts/common.sh@340 -- # ver2_l=1 00:18:56.051 06:44:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:56.051 06:44:06 -- scripts/common.sh@343 -- # case "$op" in 00:18:56.051 06:44:06 -- scripts/common.sh@344 -- # : 1 00:18:56.051 06:44:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:56.051 06:44:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.051 06:44:06 -- scripts/common.sh@364 -- # decimal 1 00:18:56.051 06:44:06 -- scripts/common.sh@352 -- # local d=1 00:18:56.051 06:44:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.051 06:44:06 -- scripts/common.sh@354 -- # echo 1 00:18:56.051 06:44:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:56.051 06:44:06 -- scripts/common.sh@365 -- # decimal 2 00:18:56.051 06:44:06 -- scripts/common.sh@352 -- # local d=2 00:18:56.051 06:44:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.051 06:44:06 -- scripts/common.sh@354 -- # echo 2 00:18:56.051 06:44:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:56.051 06:44:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:56.051 06:44:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:56.051 06:44:06 -- scripts/common.sh@367 -- # return 0 00:18:56.051 06:44:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.051 --rc genhtml_branch_coverage=1 00:18:56.051 --rc genhtml_function_coverage=1 00:18:56.051 --rc genhtml_legend=1 00:18:56.051 --rc geninfo_all_blocks=1 00:18:56.051 --rc geninfo_unexecuted_blocks=1 00:18:56.051 00:18:56.051 ' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.051 --rc genhtml_branch_coverage=1 00:18:56.051 --rc genhtml_function_coverage=1 00:18:56.051 --rc genhtml_legend=1 00:18:56.051 --rc geninfo_all_blocks=1 00:18:56.051 --rc geninfo_unexecuted_blocks=1 00:18:56.051 00:18:56.051 ' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.051 --rc genhtml_branch_coverage=1 00:18:56.051 --rc genhtml_function_coverage=1 00:18:56.051 --rc genhtml_legend=1 00:18:56.051 --rc geninfo_all_blocks=1 00:18:56.051 --rc geninfo_unexecuted_blocks=1 00:18:56.051 00:18:56.051 ' 00:18:56.051 06:44:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.051 --rc genhtml_branch_coverage=1 00:18:56.051 --rc genhtml_function_coverage=1 00:18:56.051 --rc genhtml_legend=1 00:18:56.051 --rc geninfo_all_blocks=1 00:18:56.051 --rc geninfo_unexecuted_blocks=1 00:18:56.051 00:18:56.051 ' 00:18:56.051 06:44:06 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:56.051 06:44:06 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:18:56.051 06:44:06 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.051 06:44:06 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:56.051 06:44:06 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
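Note: the scripts/common.sh xtrace above is the element-wise version test that selects the lcov option set: 'lt 1.15 2' splits both strings on '.', '-' and ':' and compares the fields numerically, so ver1[0]=1 < ver2[0]=2 decides it and 'return 0' reports that lcov 1.15 predates 2.x. A standalone sketch of the same idea, assuming only what the trace shows (this is not the scripts/common.sh source itself):

  # version_lt A B: succeed when version A sorts before version B.
  version_lt() {
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal is not less-than
  }
  version_lt 1.15 2 && echo '1.15 < 2'                  # prints: 1.15 < 2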
00:18:56.051 06:44:06 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:18:56.051 06:44:06 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:56.051 06:44:06 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:18:56.051 06:44:06 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:18:56.051 06:44:06 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:56.051 06:44:06 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:56.051 06:44:06 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:18:56.051 06:44:06 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:18:56.051 06:44:06 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:56.051 06:44:06 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:56.051 06:44:06 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:18:56.051 06:44:06 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:18:56.051 06:44:06 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:56.051 06:44:06 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:56.052 06:44:06 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:18:56.052 06:44:06 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:18:56.052 06:44:06 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:56.052 06:44:06 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:56.052 06:44:06 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:56.052 06:44:06 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:56.052 06:44:06 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:18:56.052 06:44:06 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:18:56.052 06:44:06 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:56.052 06:44:06 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@45 -- # svcpid=85063
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 85063
00:18:56.052 06:44:06 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:18:56.052 06:44:06 -- common/autotest_common.sh@829 -- # '[' -z 85063 ']'
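Note: once spdk_tgt is up as pid 85063, the run attaches the base NVMe device at 0000:00:07.0 as nvme0 and sizes it. In the bdev_get_bdevs JSON below, block_size 4096 and num_blocks 1310720 are what get_bdev_size turns into the 5120 MiB figure seen further on, and since the requested 103424 MiB exceeds that 5 GiB device, the logical volume below is created thin provisioned (-t). The conversion, as a one-line sketch using the two JSON fields:

  # MiB size of the bdev, from block_size * num_blocks.
  awk 'BEGIN { printf "%d MiB\n", 4096 * 1310720 / (1024 * 1024) }'   # -> 5120 MiB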
00:18:56.052 06:44:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.052 06:44:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.052 06:44:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.052 06:44:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.052 06:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:56.052 [2024-11-28 06:44:06.623661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:56.052 [2024-11-28 06:44:06.624391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85063 ] 00:18:56.052 [2024-11-28 06:44:06.758020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.052 [2024-11-28 06:44:06.788120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:56.052 [2024-11-28 06:44:06.788473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.986 06:44:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.986 06:44:07 -- common/autotest_common.sh@862 -- # return 0 00:18:56.986 06:44:07 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:56.986 06:44:07 -- ftl/common.sh@54 -- # local name=nvme0 00:18:56.986 06:44:07 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:56.986 06:44:07 -- ftl/common.sh@56 -- # local size=103424 00:18:56.986 06:44:07 -- ftl/common.sh@59 -- # local base_bdev 00:18:56.986 06:44:07 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:56.986 06:44:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:56.986 06:44:07 -- ftl/common.sh@62 -- # local base_size 00:18:56.986 06:44:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:56.986 06:44:07 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:18:56.986 06:44:07 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:56.986 06:44:07 -- common/autotest_common.sh@1369 -- # local bs 00:18:56.986 06:44:07 -- common/autotest_common.sh@1370 -- # local nb 00:18:56.986 06:44:07 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:57.245 06:44:07 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:57.245 { 00:18:57.245 "name": "nvme0n1", 00:18:57.245 "aliases": [ 00:18:57.245 "1ba88b84-1b88-4d58-a09c-fd696db76446" 00:18:57.245 ], 00:18:57.245 "product_name": "NVMe disk", 00:18:57.245 "block_size": 4096, 00:18:57.245 "num_blocks": 1310720, 00:18:57.245 "uuid": "1ba88b84-1b88-4d58-a09c-fd696db76446", 00:18:57.245 "assigned_rate_limits": { 00:18:57.245 "rw_ios_per_sec": 0, 00:18:57.245 "rw_mbytes_per_sec": 0, 00:18:57.245 "r_mbytes_per_sec": 0, 00:18:57.245 "w_mbytes_per_sec": 0 00:18:57.245 }, 00:18:57.245 "claimed": true, 00:18:57.245 "claim_type": "read_many_write_one", 00:18:57.245 "zoned": false, 00:18:57.245 "supported_io_types": { 00:18:57.245 "read": true, 00:18:57.245 "write": true, 00:18:57.245 "unmap": true, 00:18:57.245 "write_zeroes": true, 00:18:57.245 "flush": true, 00:18:57.245 "reset": true, 00:18:57.245 "compare": true, 
00:18:57.245 "compare_and_write": false, 00:18:57.245 "abort": true, 00:18:57.245 "nvme_admin": true, 00:18:57.245 "nvme_io": true 00:18:57.245 }, 00:18:57.245 "driver_specific": { 00:18:57.245 "nvme": [ 00:18:57.245 { 00:18:57.245 "pci_address": "0000:00:07.0", 00:18:57.245 "trid": { 00:18:57.245 "trtype": "PCIe", 00:18:57.245 "traddr": "0000:00:07.0" 00:18:57.245 }, 00:18:57.245 "ctrlr_data": { 00:18:57.245 "cntlid": 0, 00:18:57.245 "vendor_id": "0x1b36", 00:18:57.245 "model_number": "QEMU NVMe Ctrl", 00:18:57.245 "serial_number": "12341", 00:18:57.245 "firmware_revision": "8.0.0", 00:18:57.245 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:57.245 "oacs": { 00:18:57.245 "security": 0, 00:18:57.245 "format": 1, 00:18:57.245 "firmware": 0, 00:18:57.245 "ns_manage": 1 00:18:57.245 }, 00:18:57.245 "multi_ctrlr": false, 00:18:57.245 "ana_reporting": false 00:18:57.245 }, 00:18:57.245 "vs": { 00:18:57.245 "nvme_version": "1.4" 00:18:57.245 }, 00:18:57.245 "ns_data": { 00:18:57.245 "id": 1, 00:18:57.245 "can_share": false 00:18:57.245 } 00:18:57.245 } 00:18:57.245 ], 00:18:57.245 "mp_policy": "active_passive" 00:18:57.245 } 00:18:57.245 } 00:18:57.245 ]' 00:18:57.245 06:44:07 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:57.245 06:44:07 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:57.245 06:44:07 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:57.245 06:44:07 -- common/autotest_common.sh@1373 -- # nb=1310720 00:18:57.245 06:44:07 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:18:57.245 06:44:07 -- common/autotest_common.sh@1377 -- # echo 5120 00:18:57.245 06:44:07 -- ftl/common.sh@63 -- # base_size=5120 00:18:57.245 06:44:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:57.245 06:44:07 -- ftl/common.sh@67 -- # clear_lvols 00:18:57.245 06:44:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:57.245 06:44:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:57.504 06:44:08 -- ftl/common.sh@28 -- # stores=6eb48a59-6d7e-40da-aaca-677fab2b140e 00:18:57.504 06:44:08 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:57.504 06:44:08 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6eb48a59-6d7e-40da-aaca-677fab2b140e 00:18:57.504 06:44:08 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:57.763 06:44:08 -- ftl/common.sh@68 -- # lvs=5fd135b4-266b-4e37-ac15-4f127906ec97 00:18:57.763 06:44:08 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5fd135b4-266b-4e37-ac15-4f127906ec97 00:18:58.022 06:44:08 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:18:58.022 06:44:08 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- ftl/common.sh@35 -- # local name=nvc0 00:18:58.022 06:44:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:58.022 06:44:08 -- ftl/common.sh@37 -- # local base_bdev=2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- ftl/common.sh@38 -- # local cache_size= 00:18:58.022 06:44:08 -- ftl/common.sh@41 -- # get_bdev_size 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- common/autotest_common.sh@1367 -- # local bdev_name=2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- 
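The size probe and volume carve-out traced above boil down to a short RPC sequence. A minimal sketch using the same names and sizes as this run (assuming the target is already listening on /var/tmp/spdk.sock); the bdev size in MiB is block_size * num_blocks / 2^20, here 4096 * 1310720 = 5120 MiB:

  # attach the base controller and read its geometry
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
  bs=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')
  nb=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')
  echo $(( bs * nb / 1024 / 1024 ))        # 5120 (MiB)
  # drop the stale lvstore, then build a fresh store plus a thin 103424 MiB volume
  scripts/rpc.py bdev_lvol_delete_lvstore -u 6eb48a59-6d7e-40da-aaca-677fab2b140e
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5fd135b4-266b-4e37-ac15-4f127906ec97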
common/autotest_common.sh@1368 -- # local bdev_info 00:18:58.022 06:44:08 -- common/autotest_common.sh@1369 -- # local bs 00:18:58.022 06:44:08 -- common/autotest_common.sh@1370 -- # local nb 00:18:58.022 06:44:08 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.022 06:44:08 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:58.022 { 00:18:58.022 "name": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:58.022 "aliases": [ 00:18:58.022 "lvs/nvme0n1p0" 00:18:58.022 ], 00:18:58.022 "product_name": "Logical Volume", 00:18:58.022 "block_size": 4096, 00:18:58.022 "num_blocks": 26476544, 00:18:58.022 "uuid": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:58.022 "assigned_rate_limits": { 00:18:58.022 "rw_ios_per_sec": 0, 00:18:58.022 "rw_mbytes_per_sec": 0, 00:18:58.022 "r_mbytes_per_sec": 0, 00:18:58.022 "w_mbytes_per_sec": 0 00:18:58.022 }, 00:18:58.022 "claimed": false, 00:18:58.022 "zoned": false, 00:18:58.022 "supported_io_types": { 00:18:58.022 "read": true, 00:18:58.022 "write": true, 00:18:58.022 "unmap": true, 00:18:58.022 "write_zeroes": true, 00:18:58.022 "flush": false, 00:18:58.022 "reset": true, 00:18:58.022 "compare": false, 00:18:58.022 "compare_and_write": false, 00:18:58.022 "abort": false, 00:18:58.022 "nvme_admin": false, 00:18:58.022 "nvme_io": false 00:18:58.022 }, 00:18:58.022 "driver_specific": { 00:18:58.022 "lvol": { 00:18:58.022 "lvol_store_uuid": "5fd135b4-266b-4e37-ac15-4f127906ec97", 00:18:58.022 "base_bdev": "nvme0n1", 00:18:58.022 "thin_provision": true, 00:18:58.022 "snapshot": false, 00:18:58.022 "clone": false, 00:18:58.022 "esnap_clone": false 00:18:58.022 } 00:18:58.022 } 00:18:58.022 } 00:18:58.022 ]' 00:18:58.022 06:44:08 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:58.022 06:44:08 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:58.022 06:44:08 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:58.282 06:44:08 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:58.282 06:44:08 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:58.282 06:44:08 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:58.282 06:44:08 -- ftl/common.sh@41 -- # local base_size=5171 00:18:58.282 06:44:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:58.282 06:44:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:58.282 06:44:09 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:58.282 06:44:09 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:58.282 06:44:09 -- ftl/common.sh@48 -- # get_bdev_size 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.282 06:44:09 -- common/autotest_common.sh@1367 -- # local bdev_name=2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.282 06:44:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:58.282 06:44:09 -- common/autotest_common.sh@1369 -- # local bs 00:18:58.282 06:44:09 -- common/autotest_common.sh@1370 -- # local nb 00:18:58.282 06:44:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.540 06:44:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:58.540 { 00:18:58.540 "name": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:58.540 "aliases": [ 00:18:58.540 "lvs/nvme0n1p0" 00:18:58.540 ], 00:18:58.540 "product_name": "Logical Volume", 00:18:58.540 "block_size": 4096, 00:18:58.540 "num_blocks": 26476544, 
00:18:58.540 "uuid": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:58.540 "assigned_rate_limits": { 00:18:58.540 "rw_ios_per_sec": 0, 00:18:58.540 "rw_mbytes_per_sec": 0, 00:18:58.540 "r_mbytes_per_sec": 0, 00:18:58.540 "w_mbytes_per_sec": 0 00:18:58.540 }, 00:18:58.540 "claimed": false, 00:18:58.540 "zoned": false, 00:18:58.540 "supported_io_types": { 00:18:58.540 "read": true, 00:18:58.540 "write": true, 00:18:58.541 "unmap": true, 00:18:58.541 "write_zeroes": true, 00:18:58.541 "flush": false, 00:18:58.541 "reset": true, 00:18:58.541 "compare": false, 00:18:58.541 "compare_and_write": false, 00:18:58.541 "abort": false, 00:18:58.541 "nvme_admin": false, 00:18:58.541 "nvme_io": false 00:18:58.541 }, 00:18:58.541 "driver_specific": { 00:18:58.541 "lvol": { 00:18:58.541 "lvol_store_uuid": "5fd135b4-266b-4e37-ac15-4f127906ec97", 00:18:58.541 "base_bdev": "nvme0n1", 00:18:58.541 "thin_provision": true, 00:18:58.541 "snapshot": false, 00:18:58.541 "clone": false, 00:18:58.541 "esnap_clone": false 00:18:58.541 } 00:18:58.541 } 00:18:58.541 } 00:18:58.541 ]' 00:18:58.541 06:44:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:58.541 06:44:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:58.541 06:44:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:58.541 06:44:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:58.541 06:44:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:58.541 06:44:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:58.541 06:44:09 -- ftl/common.sh@48 -- # cache_size=5171 00:18:58.541 06:44:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:58.799 06:44:09 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:18:58.799 06:44:09 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.799 06:44:09 -- common/autotest_common.sh@1367 -- # local bdev_name=2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:58.799 06:44:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:58.799 06:44:09 -- common/autotest_common.sh@1369 -- # local bs 00:18:58.799 06:44:09 -- common/autotest_common.sh@1370 -- # local nb 00:18:58.799 06:44:09 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ffc002d-0d53-42a8-8dc8-0ba8e83851db 00:18:59.058 06:44:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:59.058 { 00:18:59.058 "name": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:59.058 "aliases": [ 00:18:59.058 "lvs/nvme0n1p0" 00:18:59.058 ], 00:18:59.058 "product_name": "Logical Volume", 00:18:59.058 "block_size": 4096, 00:18:59.058 "num_blocks": 26476544, 00:18:59.058 "uuid": "2ffc002d-0d53-42a8-8dc8-0ba8e83851db", 00:18:59.058 "assigned_rate_limits": { 00:18:59.058 "rw_ios_per_sec": 0, 00:18:59.058 "rw_mbytes_per_sec": 0, 00:18:59.058 "r_mbytes_per_sec": 0, 00:18:59.058 "w_mbytes_per_sec": 0 00:18:59.058 }, 00:18:59.058 "claimed": false, 00:18:59.058 "zoned": false, 00:18:59.058 "supported_io_types": { 00:18:59.058 "read": true, 00:18:59.058 "write": true, 00:18:59.058 "unmap": true, 00:18:59.058 "write_zeroes": true, 00:18:59.058 "flush": false, 00:18:59.058 "reset": true, 00:18:59.058 "compare": false, 00:18:59.058 "compare_and_write": false, 00:18:59.058 "abort": false, 00:18:59.058 "nvme_admin": false, 00:18:59.058 "nvme_io": false 00:18:59.058 }, 00:18:59.058 "driver_specific": { 00:18:59.058 "lvol": { 00:18:59.058 "lvol_store_uuid": 
"5fd135b4-266b-4e37-ac15-4f127906ec97", 00:18:59.058 "base_bdev": "nvme0n1", 00:18:59.058 "thin_provision": true, 00:18:59.058 "snapshot": false, 00:18:59.058 "clone": false, 00:18:59.058 "esnap_clone": false 00:18:59.058 } 00:18:59.058 } 00:18:59.058 } 00:18:59.058 ]' 00:18:59.058 06:44:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:59.058 06:44:09 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:59.058 06:44:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:59.058 06:44:09 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:59.058 06:44:09 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:59.058 06:44:09 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2ffc002d-0d53-42a8-8dc8-0ba8e83851db --l2p_dram_limit 10' 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:59.058 06:44:09 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2ffc002d-0d53-42a8-8dc8-0ba8e83851db --l2p_dram_limit 10 -c nvc0n1p0 00:18:59.317 [2024-11-28 06:44:09.893589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.893628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.317 [2024-11-28 06:44:09.893640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.317 [2024-11-28 06:44:09.893646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.893691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.893702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.317 [2024-11-28 06:44:09.893728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:59.317 [2024-11-28 06:44:09.893733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.893753] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.317 [2024-11-28 06:44:09.893970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.317 [2024-11-28 06:44:09.893983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.893988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.317 [2024-11-28 06:44:09.893997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:18:59.317 [2024-11-28 06:44:09.894002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.894026] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9c3e096a-f90c-4ce9-b0ab-baccc446d2b8 00:18:59.317 [2024-11-28 06:44:09.894976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.894993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:59.317 [2024-11-28 06:44:09.895001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.021 ms 00:18:59.317 [2024-11-28 06:44:09.895008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.899714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.899810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.317 [2024-11-28 06:44:09.899853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:18:59.317 [2024-11-28 06:44:09.899876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.899951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.899974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.317 [2024-11-28 06:44:09.900002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:59.317 [2024-11-28 06:44:09.900022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.900071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.900092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.317 [2024-11-28 06:44:09.900109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:59.317 [2024-11-28 06:44:09.900163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.900195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.317 [2024-11-28 06:44:09.901459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.901540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.317 [2024-11-28 06:44:09.901581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:18:59.317 [2024-11-28 06:44:09.901598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.901769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.901795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.317 [2024-11-28 06:44:09.901873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:59.317 [2024-11-28 06:44:09.901890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.901914] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:59.317 [2024-11-28 06:44:09.902012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:59.317 [2024-11-28 06:44:09.902041] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.317 [2024-11-28 06:44:09.902093] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:59.317 [2024-11-28 06:44:09.902125] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.317 [2024-11-28 06:44:09.902149] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.317 [2024-11-28 06:44:09.902282] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:59.317 [2024-11-28 
06:44:09.902300] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.317 [2024-11-28 06:44:09.902317] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:59.317 [2024-11-28 06:44:09.902440] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:59.317 [2024-11-28 06:44:09.902472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.902487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.317 [2024-11-28 06:44:09.902553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:18:59.317 [2024-11-28 06:44:09.902570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.902637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.317 [2024-11-28 06:44:09.902683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.317 [2024-11-28 06:44:09.902711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:59.317 [2024-11-28 06:44:09.902750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.317 [2024-11-28 06:44:09.902819] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.317 [2024-11-28 06:44:09.902836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.317 [2024-11-28 06:44:09.903196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.317 [2024-11-28 06:44:09.903490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.317 [2024-11-28 06:44:09.903672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.318 [2024-11-28 06:44:09.903740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.318 [2024-11-28 06:44:09.903772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:59.318 [2024-11-28 06:44:09.903795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.318 [2024-11-28 06:44:09.903821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:59.318 [2024-11-28 06:44:09.903842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.318 [2024-11-28 06:44:09.903867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.318 [2024-11-28 06:44:09.903888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:59.318 [2024-11-28 06:44:09.903918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.318 [2024-11-28 06:44:09.903940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.318 [2024-11-28 06:44:09.903965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:59.318 [2024-11-28 06:44:09.903984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.318 [2024-11-28 06:44:09.904030] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:59.318 [2024-11-28 06:44:09.904054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:59.318 [2024-11-28 06:44:09.904099] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:59.318 [2024-11-28 06:44:09.904120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:59.318 [2024-11-28 06:44:09.904146] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.318 [2024-11-28 06:44:09.904166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.318 [2024-11-28 06:44:09.904214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.318 [2024-11-28 06:44:09.904242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.318 [2024-11-28 06:44:09.904303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.318 [2024-11-28 06:44:09.904388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.318 [2024-11-28 06:44:09.904444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.318 [2024-11-28 06:44:09.904472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:59.318 [2024-11-28 06:44:09.904524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.318 [2024-11-28 06:44:09.904548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:59.318 [2024-11-28 06:44:09.904576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.318 [2024-11-28 06:44:09.904597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.318 [2024-11-28 06:44:09.904621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:59.318 [2024-11-28 06:44:09.904642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.318 [2024-11-28 06:44:09.904665] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.318 [2024-11-28 06:44:09.904696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.318 [2024-11-28 06:44:09.905040] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.318 [2024-11-28 06:44:09.905170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.318 [2024-11-28 06:44:09.905250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.318 [2024-11-28 06:44:09.905314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.318 [2024-11-28 06:44:09.905441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.318 [2024-11-28 06:44:09.905580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:59.318 [2024-11-28 06:44:09.905738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.318 [2024-11-28 06:44:09.905873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.318 [2024-11-28 06:44:09.906009] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.318 [2024-11-28 06:44:09.906184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
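A quick consistency check on the layout dump: the L2P region holds one 4-byte entry per logical 4 KiB block, so 20971520 entries x 4 B = 80 MiB, matching the 80.00 MiB l2p region above; only the configured --l2p_dram_limit of 10 MiB of it may be resident in DRAM at once, which the l2p_cache notice further down confirms (9 of 10 MiB usable).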
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.318 [2024-11-28 06:44:09.906334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:59.318 [2024-11-28 06:44:09.906362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:59.318 [2024-11-28 06:44:09.906388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:59.318 [2024-11-28 06:44:09.906411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:59.318 [2024-11-28 06:44:09.906438] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:59.318 [2024-11-28 06:44:09.906460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:59.318 [2024-11-28 06:44:09.906486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:59.318 [2024-11-28 06:44:09.906508] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:59.318 [2024-11-28 06:44:09.906538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:59.318 [2024-11-28 06:44:09.906561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:59.318 [2024-11-28 06:44:09.906587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:59.318 [2024-11-28 06:44:09.906609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:59.318 [2024-11-28 06:44:09.906637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:59.318 [2024-11-28 06:44:09.906658] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.318 [2024-11-28 06:44:09.906687] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.318 [2024-11-28 06:44:09.906736] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.318 [2024-11-28 06:44:09.906764] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.318 [2024-11-28 06:44:09.906787] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.318 [2024-11-28 06:44:09.906813] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.318 [2024-11-28 06:44:09.906842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.906870] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.318 [2024-11-28 06:44:09.906894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.056 ms 00:18:59.318 [2024-11-28 06:44:09.906929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.917380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.917506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.318 [2024-11-28 06:44:09.917521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.241 ms 00:18:59.318 [2024-11-28 06:44:09.917530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.917617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.917627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.318 [2024-11-28 06:44:09.917635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:59.318 [2024-11-28 06:44:09.917648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.926204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.926242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.318 [2024-11-28 06:44:09.926252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.513 ms 00:18:59.318 [2024-11-28 06:44:09.926264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.926296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.926306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.318 [2024-11-28 06:44:09.926313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:59.318 [2024-11-28 06:44:09.926322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.926634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.926659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.318 [2024-11-28 06:44:09.926668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:18:59.318 [2024-11-28 06:44:09.926677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.926794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.926808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.318 [2024-11-28 06:44:09.926816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:59.318 [2024-11-28 06:44:09.926828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.932085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.932198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.318 [2024-11-28 06:44:09.932212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:18:59.318 [2024-11-28 06:44:09.932221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.318 [2024-11-28 06:44:09.940455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 
(of 10) MiB 00:18:59.318 [2024-11-28 06:44:09.943126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.318 [2024-11-28 06:44:09.943240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.319 [2024-11-28 06:44:09.943257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.834 ms 00:18:59.319 [2024-11-28 06:44:09.943264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.319 [2024-11-28 06:44:09.999949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.319 [2024-11-28 06:44:09.999992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:59.319 [2024-11-28 06:44:10.000006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.654 ms 00:18:59.319 [2024-11-28 06:44:10.000014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.319 [2024-11-28 06:44:10.000051] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:59.319 [2024-11-28 06:44:10.000063] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:02.602 [2024-11-28 06:44:12.761699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.761774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:02.602 [2024-11-28 06:44:12.761791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2761.628 ms 00:19:02.602 [2024-11-28 06:44:12.761805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.761979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.761989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.602 [2024-11-28 06:44:12.761999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:02.602 [2024-11-28 06:44:12.762007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.766072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.766209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:02.602 [2024-11-28 06:44:12.766232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:19:02.602 [2024-11-28 06:44:12.766240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.769580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.769715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:02.602 [2024-11-28 06:44:12.769735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:19:02.602 [2024-11-28 06:44:12.769742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.769904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.769912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.602 [2024-11-28 06:44:12.769922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:02.602 [2024-11-28 06:44:12.769929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.791915] 
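First startup scrubs the whole 4 GiB NV-cache data region before the device is usable, which is why bdev_ftl_create is issued with a long RPC timeout. A rough sketch of the construction that produced the startup trace above, with the same names as this run:

  # carve a 5171 MiB write-buffer split out of the cache controller
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # build the FTL bdev: base = the thin lvol, cache = the split, L2P capped at 10 MiB of DRAM
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 2ffc002d-0d53-42a8-8dc8-0ba8e83851db \
      --l2p_dram_limit 10 -c nvc0n1p0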
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.791957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:02.602 [2024-11-28 06:44:12.791969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.964 ms 00:19:02.602 [2024-11-28 06:44:12.791976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.796752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.796874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:02.602 [2024-11-28 06:44:12.796897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:19:02.602 [2024-11-28 06:44:12.796905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.798147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.798176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:02.602 [2024-11-28 06:44:12.798187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:19:02.602 [2024-11-28 06:44:12.798194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.802404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.802439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.602 [2024-11-28 06:44:12.802450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:19:02.602 [2024-11-28 06:44:12.802457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.602 [2024-11-28 06:44:12.802494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.602 [2024-11-28 06:44:12.802504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.602 [2024-11-28 06:44:12.802516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.603 [2024-11-28 06:44:12.802523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.603 [2024-11-28 06:44:12.802591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.603 [2024-11-28 06:44:12.802600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.603 [2024-11-28 06:44:12.802612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:02.603 [2024-11-28 06:44:12.802619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.603 [2024-11-28 06:44:12.803506] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2909.515 ms, result 0 00:19:02.603 { 00:19:02.603 "name": "ftl0", 00:19:02.603 "uuid": "9c3e096a-f90c-4ce9-b0ab-baccc446d2b8" 00:19:02.603 } 00:19:02.603 06:44:12 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:19:02.603 06:44:12 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:02.603 06:44:13 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:19:02.603 06:44:13 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:19:02.603 06:44:13 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:19:02.603 /dev/nbd0 00:19:02.603 06:44:13 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:19:02.603 06:44:13 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:02.603 06:44:13 -- common/autotest_common.sh@867 -- # local i 00:19:02.603 06:44:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:02.603 06:44:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:02.603 06:44:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:02.603 06:44:13 -- common/autotest_common.sh@871 -- # break 00:19:02.603 06:44:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:02.603 06:44:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:02.603 06:44:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:19:02.603 1+0 records in 00:19:02.603 1+0 records out 00:19:02.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813819 s, 5.0 MB/s 00:19:02.603 06:44:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:02.603 06:44:13 -- common/autotest_common.sh@884 -- # size=4096 00:19:02.603 06:44:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:02.603 06:44:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:02.603 06:44:13 -- common/autotest_common.sh@887 -- # return 0 00:19:02.603 06:44:13 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:19:02.603 [2024-11-28 06:44:13.277024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:02.603 [2024-11-28 06:44:13.277121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:19:02.908 [2024-11-28 06:44:13.411435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.908 [2024-11-28 06:44:13.441829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.875  [2024-11-28T06:44:15.581Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-28T06:44:16.516Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-28T06:44:17.892Z] Copying: 647/1024 [MB] (254 MBps) [2024-11-28T06:44:18.151Z] Copying: 906/1024 [MB] (258 MBps) [2024-11-28T06:44:18.151Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:19:07.381 00:19:07.381 06:44:18 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:09.912 06:44:20 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:19:09.912 [2024-11-28 06:44:20.117276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
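With ftl0 up, the trace above exposes it through the kernel nbd driver and writes 1 GiB of random data to it, keeping a copy on the filesystem so the checksum can be compared later. A condensed sketch of the same steps, paths relative to the spdk repo checkout (1 GiB = 262144 x 4 KiB blocks); the throughput trace that follows is the replay onto the FTL device:

  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  # stage 1 GiB of random data in a file and checksum it...
  build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile
  # ...then replay the same bytes onto the FTL device with O_DIRECT
  build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock \
      --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct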
00:19:09.912 [2024-11-28 06:44:20.117516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85265 ] 00:19:09.912 [2024-11-28 06:44:20.252005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.912 [2024-11-28 06:44:20.281431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.848  [2024-11-28T06:44:22.552Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-28T06:44:23.486Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-28T06:44:24.420Z] Copying: 71/1024 [MB] (30 MBps) [2024-11-28T06:44:25.355Z] Copying: 102/1024 [MB] (30 MBps) [2024-11-28T06:44:26.730Z] Copying: 134/1024 [MB] (31 MBps) [2024-11-28T06:44:27.666Z] Copying: 168/1024 [MB] (34 MBps) [2024-11-28T06:44:28.603Z] Copying: 201/1024 [MB] (32 MBps) [2024-11-28T06:44:29.537Z] Copying: 232/1024 [MB] (31 MBps) [2024-11-28T06:44:30.472Z] Copying: 262/1024 [MB] (30 MBps) [2024-11-28T06:44:31.406Z] Copying: 295/1024 [MB] (32 MBps) [2024-11-28T06:44:32.341Z] Copying: 326/1024 [MB] (31 MBps) [2024-11-28T06:44:33.718Z] Copying: 357/1024 [MB] (30 MBps) [2024-11-28T06:44:34.654Z] Copying: 387/1024 [MB] (30 MBps) [2024-11-28T06:44:35.589Z] Copying: 417/1024 [MB] (30 MBps) [2024-11-28T06:44:36.525Z] Copying: 444/1024 [MB] (27 MBps) [2024-11-28T06:44:37.461Z] Copying: 478/1024 [MB] (33 MBps) [2024-11-28T06:44:38.397Z] Copying: 509/1024 [MB] (31 MBps) [2024-11-28T06:44:39.356Z] Copying: 540/1024 [MB] (31 MBps) [2024-11-28T06:44:40.746Z] Copying: 571/1024 [MB] (30 MBps) [2024-11-28T06:44:41.682Z] Copying: 604/1024 [MB] (33 MBps) [2024-11-28T06:44:42.617Z] Copying: 639/1024 [MB] (35 MBps) [2024-11-28T06:44:43.553Z] Copying: 672/1024 [MB] (32 MBps) [2024-11-28T06:44:44.488Z] Copying: 708/1024 [MB] (36 MBps) [2024-11-28T06:44:45.423Z] Copying: 739/1024 [MB] (30 MBps) [2024-11-28T06:44:46.357Z] Copying: 769/1024 [MB] (30 MBps) [2024-11-28T06:44:47.731Z] Copying: 804/1024 [MB] (34 MBps) [2024-11-28T06:44:48.666Z] Copying: 836/1024 [MB] (31 MBps) [2024-11-28T06:44:49.601Z] Copying: 868/1024 [MB] (32 MBps) [2024-11-28T06:44:50.554Z] Copying: 902/1024 [MB] (34 MBps) [2024-11-28T06:44:51.488Z] Copying: 936/1024 [MB] (33 MBps) [2024-11-28T06:44:52.424Z] Copying: 968/1024 [MB] (32 MBps) [2024-11-28T06:44:52.990Z] Copying: 1006/1024 [MB] (37 MBps) [2024-11-28T06:44:53.248Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:19:42.478 00:19:42.478 06:44:53 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:19:42.478 06:44:53 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:19:42.478 06:44:53 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:42.738 [2024-11-28 06:44:53.343106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.738 [2024-11-28 06:44:53.343152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:42.739 [2024-11-28 06:44:53.343165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:42.739 [2024-11-28 06:44:53.343175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.343200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.739 [2024-11-28 06:44:53.343648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 
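Once the copy finishes, the device is flushed and taken down; the unload notices that follow come from these calls (sketch, same names as this run):

  sync /dev/nbd0
  scripts/rpc.py nbd_stop_disk /dev/nbd0
  scripts/rpc.py bdev_ftl_unload -b ftl0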
[2024-11-28 06:44:53.343664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:42.739 [2024-11-28 06:44:53.343675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:19:42.739 [2024-11-28 06:44:53.343684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.346380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.346413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:42.739 [2024-11-28 06:44:53.346427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:19:42.739 [2024-11-28 06:44:53.346435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.362329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.362361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:42.739 [2024-11-28 06:44:53.362373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.871 ms 00:19:42.739 [2024-11-28 06:44:53.362380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.368509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.368541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:42.739 [2024-11-28 06:44:53.368557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:19:42.739 [2024-11-28 06:44:53.368565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.370203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.370235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:42.739 [2024-11-28 06:44:53.370245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:19:42.739 [2024-11-28 06:44:53.370252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.375158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.375193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:42.739 [2024-11-28 06:44:53.375204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:19:42.739 [2024-11-28 06:44:53.375214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.375343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.375353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:42.739 [2024-11-28 06:44:53.375362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:42.739 [2024-11-28 06:44:53.375370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.378123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.378154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:42.739 [2024-11-28 06:44:53.378164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.732 ms 00:19:42.739 [2024-11-28 06:44:53.378171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.380484] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.380513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:42.739 [2024-11-28 06:44:53.380524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:19:42.739 [2024-11-28 06:44:53.380530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.382420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.382450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:42.739 [2024-11-28 06:44:53.382460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:19:42.739 [2024-11-28 06:44:53.382466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.383869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.739 [2024-11-28 06:44:53.383904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:42.739 [2024-11-28 06:44:53.383914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:19:42.739 [2024-11-28 06:44:53.383921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.739 [2024-11-28 06:44:53.383953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:42.739 [2024-11-28 06:44:53.383966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.383978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.383985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.383996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:19:42.739 [2024-11-28 06:44:53.384095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:42.739 [2024-11-28 06:44:53.384295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free
00:19:42.739-00:19:42.740 [2024-11-28 06:44:53.384303-06:44:53.384828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41 ... Band 100: 0 / 261120 wr_cnt: 0 state: free (60 identical per-band records condensed)
00:19:42.740 [2024-11-28 06:44:53.384843] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:42.740 [2024-11-28 06:44:53.384852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c3e096a-f90c-4ce9-b0ab-baccc446d2b8
00:19:42.740 [2024-11-28 06:44:53.384859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:42.740 [2024-11-28 06:44:53.384870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:42.740 [2024-11-28 06:44:53.384876] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:42.740 [2024-11-28 06:44:53.384891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:42.740 [2024-11-28 06:44:53.384897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:42.740 [2024-11-28 06:44:53.384906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:42.740 [2024-11-28 06:44:53.384912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:42.740 [2024-11-28 06:44:53.384918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:42.740 [2024-11-28 06:44:53.384922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:42.740 [2024-11-28 06:44:53.384929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.740 [2024-11-28 06:44:53.384935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:42.740 [2024-11-28 06:44:53.384942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms
00:19:42.740 [2024-11-28 06:44:53.384948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.740 [2024-11-28 06:44:53.386251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.740 [2024-11-28 06:44:53.386269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:42.740 [2024-11-28 06:44:53.386278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms
00:19:42.740 [2024-11-28 06:44:53.386283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.386342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.740 [2024-11-28 06:44:53.386349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:42.740 [2024-11-28 06:44:53.386357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:42.740 [2024-11-28 06:44:53.386362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.390982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.391116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.740 [2024-11-28 06:44:53.391131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.391138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.391185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.391192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.740 [2024-11-28 06:44:53.391199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.391205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.391257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.391265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.740 [2024-11-28 06:44:53.391272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.391278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.391292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.391298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.740 [2024-11-28 06:44:53.391305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.391311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.399315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.399352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.740 [2024-11-28 06:44:53.399362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.399368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.402498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.740 [2024-11-28 06:44:53.402615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.740 [2024-11-28 06:44:53.402629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.740 [2024-11-28 06:44:53.402635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.740 [2024-11-28 06:44:53.402672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.741 [2024-11-28 
06:44:53.402692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.402698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.402747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.741 [2024-11-28 06:44:53.402762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.402767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.402828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.741 [2024-11-28 06:44:53.402847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.402852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.402877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:42.741 [2024-11-28 06:44:53.402894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.402899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.402932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.741 [2024-11-28 06:44:53.402948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.402954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.402988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.741 [2024-11-28 06:44:53.402995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.741 [2024-11-28 06:44:53.403002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.741 [2024-11-28 06:44:53.403007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.741 [2024-11-28 06:44:53.403117] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.981 ms, result 0 00:19:42.741 true 00:19:42.741 06:44:53 -- ftl/dirty_shutdown.sh@83 -- # kill -9 85063 00:19:42.741 06:44:53 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid85063 00:19:42.741 06:44:53 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:19:42.741 [2024-11-28 06:44:53.464116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
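At this point dirty_shutdown.sh has killed the spdk_tgt process (kill -9 85063) to force a dirty shutdown, and spdk_dd begins generating a 1 GiB test file from /dev/urandom. A minimal sketch, assuming nothing beyond the --bs/--count flags visible in the command line above, of why the progress meter that follows tops out at 1024/1024 [MB]:

/* check_dd_size.c - sanity check of the spdk_dd job above; this helper is
 * ours, not part of SPDK. Build: cc check_dd_size.c */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block_size  = 4096;    /* --bs=4096 */
    const uint64_t block_count = 262144;  /* --count=262144 */
    const uint64_t bytes = block_size * block_count;

    /* 4096 * 262144 = 2^30 bytes, i.e. exactly 1024 MB on the meter. */
    printf("%llu bytes = %llu MB\n",
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 20));

    /* The "average 258 MBps" figure below implies roughly 1024 / 258,
     * about 4 s of copying, consistent with the progress timestamps. */
    printf("implied copy time: %.2f s\n", 1024.0 / 258.0);
    return 0;
}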
00:19:42.741 [2024-11-28 06:44:53.464195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85623 ] 00:19:42.999 [2024-11-28 06:44:53.592502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.999 [2024-11-28 06:44:53.620235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.932  [2024-11-28T06:44:56.085Z] Copying: 261/1024 [MB] (261 MBps) [2024-11-28T06:44:57.021Z] Copying: 521/1024 [MB] (260 MBps) [2024-11-28T06:44:57.957Z] Copying: 775/1024 [MB] (254 MBps) [2024-11-28T06:44:57.957Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:19:47.187 00:19:47.187 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 85063 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:19:47.187 06:44:57 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:47.187 [2024-11-28 06:44:57.834675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:47.187 [2024-11-28 06:44:57.834801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85669 ] 00:19:47.446 [2024-11-28 06:44:57.968566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.446 [2024-11-28 06:44:57.997069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.446 [2024-11-28 06:44:58.077210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:47.446 [2024-11-28 06:44:58.077272] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:47.446 [2024-11-28 06:44:58.135940] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:47.446 [2024-11-28 06:44:58.136240] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:47.446 [2024-11-28 06:44:58.136477] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:47.707 [2024-11-28 06:44:58.312433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.312467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:47.707 [2024-11-28 06:44:58.312476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:47.707 [2024-11-28 06:44:58.312482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.312522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.312530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.707 [2024-11-28 06:44:58.312538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:47.707 [2024-11-28 06:44:58.312543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.312555] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:47.707 [2024-11-28 06:44:58.312746] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:19:47.707 [2024-11-28 06:44:58.312757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.312765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.707 [2024-11-28 06:44:58.312772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:19:47.707 [2024-11-28 06:44:58.312783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.313699] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:47.707 [2024-11-28 06:44:58.315688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.315733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:47.707 [2024-11-28 06:44:58.315742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:19:47.707 [2024-11-28 06:44:58.315748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.315790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.315798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:47.707 [2024-11-28 06:44:58.315804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:47.707 [2024-11-28 06:44:58.315809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.320187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.320214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.707 [2024-11-28 06:44:58.320227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:19:47.707 [2024-11-28 06:44:58.320238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.320290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.320297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.707 [2024-11-28 06:44:58.320303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:47.707 [2024-11-28 06:44:58.320317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.320357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.320364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:47.707 [2024-11-28 06:44:58.320370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:47.707 [2024-11-28 06:44:58.320378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.320396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.707 [2024-11-28 06:44:58.321552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.321576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.707 [2024-11-28 06:44:58.321583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:19:47.707 [2024-11-28 06:44:58.321593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.321617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 
[2024-11-28 06:44:58.321623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:47.707 [2024-11-28 06:44:58.321629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:47.707 [2024-11-28 06:44:58.321635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.707 [2024-11-28 06:44:58.321649] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:47.707 [2024-11-28 06:44:58.321663] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:47.707 [2024-11-28 06:44:58.321692] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:47.707 [2024-11-28 06:44:58.321722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:47.707 [2024-11-28 06:44:58.321780] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:47.707 [2024-11-28 06:44:58.321788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:47.707 [2024-11-28 06:44:58.321795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:47.707 [2024-11-28 06:44:58.321803] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:47.707 [2024-11-28 06:44:58.321809] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:47.707 [2024-11-28 06:44:58.321815] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:47.707 [2024-11-28 06:44:58.321820] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:47.707 [2024-11-28 06:44:58.321827] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:47.707 [2024-11-28 06:44:58.321833] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:47.707 [2024-11-28 06:44:58.321839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.707 [2024-11-28 06:44:58.321845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:47.708 [2024-11-28 06:44:58.321850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:19:47.708 [2024-11-28 06:44:58.321856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.708 [2024-11-28 06:44:58.321901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.708 [2024-11-28 06:44:58.321907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:47.708 [2024-11-28 06:44:58.321912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:47.708 [2024-11-28 06:44:58.321918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.708 [2024-11-28 06:44:58.321974] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:47.708 [2024-11-28 06:44:58.321984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:47.708 [2024-11-28 06:44:58.321990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.708 [2024-11-28 06:44:58.321996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322001] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:47.708 [2024-11-28 06:44:58.322006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:47.708 [2024-11-28 06:44:58.322023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.708 [2024-11-28 06:44:58.322034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:47.708 [2024-11-28 06:44:58.322039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:47.708 [2024-11-28 06:44:58.322044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:47.708 [2024-11-28 06:44:58.322049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:47.708 [2024-11-28 06:44:58.322054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:47.708 [2024-11-28 06:44:58.322058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:47.708 [2024-11-28 06:44:58.322068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:47.708 [2024-11-28 06:44:58.322073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:47.708 [2024-11-28 06:44:58.322083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:47.708 [2024-11-28 06:44:58.322087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:47.708 [2024-11-28 06:44:58.322100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:47.708 [2024-11-28 06:44:58.322114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:47.708 [2024-11-28 06:44:58.322128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:47.708 [2024-11-28 06:44:58.322142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:47.708 [2024-11-28 06:44:58.322156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:47.708 [2024-11-28 
06:44:58.322161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.708 [2024-11-28 06:44:58.322166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:47.708 [2024-11-28 06:44:58.322175] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:47.708 [2024-11-28 06:44:58.322180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:47.708 [2024-11-28 06:44:58.322184] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:47.708 [2024-11-28 06:44:58.322190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:47.708 [2024-11-28 06:44:58.322196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:47.708 [2024-11-28 06:44:58.322207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:47.708 [2024-11-28 06:44:58.322212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:47.708 [2024-11-28 06:44:58.322217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:47.708 [2024-11-28 06:44:58.322222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:47.708 [2024-11-28 06:44:58.322227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:47.708 [2024-11-28 06:44:58.322232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:47.708 [2024-11-28 06:44:58.322237] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:47.708 [2024-11-28 06:44:58.322244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.708 [2024-11-28 06:44:58.322250] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:47.708 [2024-11-28 06:44:58.322256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:47.708 [2024-11-28 06:44:58.322263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:47.708 [2024-11-28 06:44:58.322268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:47.708 [2024-11-28 06:44:58.322273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:47.708 [2024-11-28 06:44:58.322279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:47.708 [2024-11-28 06:44:58.322284] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:47.708 [2024-11-28 06:44:58.322289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:47.708 [2024-11-28 06:44:58.322294] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:47.708 [2024-11-28 06:44:58.322300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:47.708 [2024-11-28 06:44:58.322305] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:47.708 [2024-11-28 06:44:58.322310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:47.708 [2024-11-28 06:44:58.322316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:47.708 [2024-11-28 06:44:58.322321] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:47.708 [2024-11-28 06:44:58.322328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:47.708 [2024-11-28 06:44:58.322334] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:47.709 [2024-11-28 06:44:58.322340] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:47.709 [2024-11-28 06:44:58.322345] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:47.709 [2024-11-28 06:44:58.322356] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:47.709 [2024-11-28 06:44:58.322362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.322367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:47.709 [2024-11-28 06:44:58.322373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:19:47.709 [2024-11-28 06:44:58.322379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.327699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.327727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.709 [2024-11-28 06:44:58.327737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.290 ms 00:19:47.709 [2024-11-28 06:44:58.327743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.327817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.327824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:47.709 [2024-11-28 06:44:58.327830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:47.709 [2024-11-28 06:44:58.327835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.349455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.349509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.709 [2024-11-28 06:44:58.349525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.593 ms 00:19:47.709 [2024-11-28 06:44:58.349536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.349579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
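The layout dump above reports every region twice: once in MiB (ftl_layout.c:dump_region) and once as hex block offsets/sizes in the "SB metadata layout" records (upgrade/ftl_sb_v5.c). The two views line up if one FTL block is 4 KiB; that block size is an assumption here, but it is the only value consistent with every region pair in this dump. A minimal cross-check using the l2p region (type:0x2, blk_offs:0x20 blk_sz:0x5000) and the L2P sizing printed during layout setup:

/* layout_check.c - cross-checks the MiB and hex views of the layout dump.
 * FTL_BLOCK_SIZE is assumed, chosen so that both views agree. */
#include <stdio.h>
#include <stdint.h>

#define FTL_BLOCK_SIZE 4096ULL

static double blocks_to_mib(uint64_t blocks)
{
    return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
    /* Region type:0x2 (l2p): blk_offs:0x20 blk_sz:0x5000 */
    printf("l2p offset: %.2f MiB\n", blocks_to_mib(0x20));   /* 0.12 MiB  */
    printf("l2p size:   %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00 MiB */

    /* Independently: "L2P entries: 20971520" x "L2P address size: 4"
     * = 80 MiB, matching the region size above. */
    printf("l2p table:  %.2f MiB\n", (20971520.0 * 4.0) / (1024.0 * 1024.0));
    return 0;
}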
00:19:47.709 [2024-11-28 06:44:58.349591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.709 [2024-11-28 06:44:58.349602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.709 [2024-11-28 06:44:58.349612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.350040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.350074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.709 [2024-11-28 06:44:58.350087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:19:47.709 [2024-11-28 06:44:58.350099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.350263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.350281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.709 [2024-11-28 06:44:58.350297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:47.709 [2024-11-28 06:44:58.350308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.356476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.356515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.709 [2024-11-28 06:44:58.356527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.137 ms 00:19:47.709 [2024-11-28 06:44:58.356545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.359255] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:47.709 [2024-11-28 06:44:58.359298] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:47.709 [2024-11-28 06:44:58.359311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.359325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:47.709 [2024-11-28 06:44:58.359336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:19:47.709 [2024-11-28 06:44:58.359345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.370845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.370878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:47.709 [2024-11-28 06:44:58.370889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.455 ms 00:19:47.709 [2024-11-28 06:44:58.370894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.372642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.372671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:47.709 [2024-11-28 06:44:58.372677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:19:47.709 [2024-11-28 06:44:58.372682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.373986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.374094] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:47.709 [2024-11-28 06:44:58.374106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:19:47.709 [2024-11-28 06:44:58.374111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.374263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.374271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:47.709 [2024-11-28 06:44:58.374277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:47.709 [2024-11-28 06:44:58.374282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.390309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.390344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:47.709 [2024-11-28 06:44:58.390352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.015 ms 00:19:47.709 [2024-11-28 06:44:58.390358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.395876] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:47.709 [2024-11-28 06:44:58.397861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.397885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:47.709 [2024-11-28 06:44:58.397898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.462 ms 00:19:47.709 [2024-11-28 06:44:58.397904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.397949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.397957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:47.709 [2024-11-28 06:44:58.397965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.709 [2024-11-28 06:44:58.397971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.398006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.398013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:47.709 [2024-11-28 06:44:58.398019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:47.709 [2024-11-28 06:44:58.398024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.399010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.399035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:47.709 [2024-11-28 06:44:58.399042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:19:47.709 [2024-11-28 06:44:58.399050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.709 [2024-11-28 06:44:58.399066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.709 [2024-11-28 06:44:58.399077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:47.710 [2024-11-28 06:44:58.399082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:47.710 [2024-11-28 06:44:58.399087] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:47.710 [2024-11-28 06:44:58.399114] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:47.710 [2024-11-28 06:44:58.399122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.710 [2024-11-28 06:44:58.399127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:47.710 [2024-11-28 06:44:58.399132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:47.710 [2024-11-28 06:44:58.399138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.710 [2024-11-28 06:44:58.402125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.710 [2024-11-28 06:44:58.402153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:47.710 [2024-11-28 06:44:58.402166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:19:47.710 [2024-11-28 06:44:58.402172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.710 [2024-11-28 06:44:58.402224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.710 [2024-11-28 06:44:58.402230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:47.710 [2024-11-28 06:44:58.402236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:47.710 [2024-11-28 06:44:58.402242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.710 [2024-11-28 06:44:58.402949] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 90.204 ms, result 0 00:19:48.648  [2024-11-28T06:45:00.811Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-28T06:45:01.748Z] Copying: 86/1024 [MB] (39 MBps) [2024-11-28T06:45:02.684Z] Copying: 134/1024 [MB] (48 MBps) [2024-11-28T06:45:03.618Z] Copying: 179/1024 [MB] (44 MBps) [2024-11-28T06:45:04.553Z] Copying: 224/1024 [MB] (45 MBps) [2024-11-28T06:45:05.487Z] Copying: 270/1024 [MB] (46 MBps) [2024-11-28T06:45:06.422Z] Copying: 316/1024 [MB] (45 MBps) [2024-11-28T06:45:07.798Z] Copying: 361/1024 [MB] (45 MBps) [2024-11-28T06:45:08.732Z] Copying: 409/1024 [MB] (47 MBps) [2024-11-28T06:45:09.668Z] Copying: 458/1024 [MB] (49 MBps) [2024-11-28T06:45:10.603Z] Copying: 512/1024 [MB] (53 MBps) [2024-11-28T06:45:11.540Z] Copying: 558/1024 [MB] (45 MBps) [2024-11-28T06:45:12.476Z] Copying: 604/1024 [MB] (46 MBps) [2024-11-28T06:45:13.849Z] Copying: 650/1024 [MB] (45 MBps) [2024-11-28T06:45:14.416Z] Copying: 695/1024 [MB] (45 MBps) [2024-11-28T06:45:15.790Z] Copying: 740/1024 [MB] (45 MBps) [2024-11-28T06:45:16.725Z] Copying: 789/1024 [MB] (48 MBps) [2024-11-28T06:45:17.659Z] Copying: 834/1024 [MB] (45 MBps) [2024-11-28T06:45:18.590Z] Copying: 881/1024 [MB] (47 MBps) [2024-11-28T06:45:19.521Z] Copying: 925/1024 [MB] (44 MBps) [2024-11-28T06:45:20.456Z] Copying: 974/1024 [MB] (48 MBps) [2024-11-28T06:45:21.835Z] Copying: 1020/1024 [MB] (46 MBps) [2024-11-28T06:45:21.835Z] Copying: 1048456/1048576 [kB] (3136 kBps) [2024-11-28T06:45:21.835Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-11-28 06:45:21.522093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.522143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.065 [2024-11-28 06:45:21.522155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:11.065 [2024-11-28 06:45:21.522162] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.522722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.065 [2024-11-28 06:45:21.524633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.524774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.065 [2024-11-28 06:45:21.524789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:20:11.065 [2024-11-28 06:45:21.524796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.532932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.532960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.065 [2024-11-28 06:45:21.532969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.295 ms 00:20:11.065 [2024-11-28 06:45:21.532975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.547985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.548091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.065 [2024-11-28 06:45:21.548105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.997 ms 00:20:11.065 [2024-11-28 06:45:21.548115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.552859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.552894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:11.065 [2024-11-28 06:45:21.552902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.721 ms 00:20:11.065 [2024-11-28 06:45:21.552909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.553861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.553887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.065 [2024-11-28 06:45:21.553894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:20:11.065 [2024-11-28 06:45:21.553900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.556854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.556883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.065 [2024-11-28 06:45:21.556895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.930 ms 00:20:11.065 [2024-11-28 06:45:21.556900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.612799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.612834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.065 [2024-11-28 06:45:21.612843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.871 ms 00:20:11.065 [2024-11-28 06:45:21.612850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.614519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.614550] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:11.065 [2024-11-28 06:45:21.614558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:20:11.065 [2024-11-28 06:45:21.614564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.615675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.615812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:11.065 [2024-11-28 06:45:21.615824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:20:11.065 [2024-11-28 06:45:21.615830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.616643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.616668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.065 [2024-11-28 06:45:21.616675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:20:11.065 [2024-11-28 06:45:21.616680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.617549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.065 [2024-11-28 06:45:21.617577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.065 [2024-11-28 06:45:21.617585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:11.065 [2024-11-28 06:45:21.617590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.065 [2024-11-28 06:45:21.617612] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.065 [2024-11-28 06:45:21.617623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:20:11.065 [2024-11-28 06:45:21.617631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.065 [2024-11-28 06:45:21.617698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13 ... Band 87: 0 / 261120 wr_cnt: 0 state: free (75 identical per-band records condensed, wallclock 06:45:21.617698-06:45:21.618181)
00:20:11.066 [2024-11-28 06:45:21.618187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.066 [2024-11-28 06:45:21.618272] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.066 [2024-11-28 06:45:21.618278] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c3e096a-f90c-4ce9-b0ab-baccc446d2b8 00:20:11.066 [2024-11-28 06:45:21.618287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:20:11.066 [2024-11-28 06:45:21.618296] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128448 00:20:11.066 [2024-11-28 06:45:21.618301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:20:11.066 [2024-11-28 06:45:21.618307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:20:11.066 [2024-11-28 06:45:21.618314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.066 [2024-11-28 06:45:21.618320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.066 [2024-11-28 06:45:21.618326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.066 [2024-11-28 06:45:21.618331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.066 [2024-11-28 06:45:21.618336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.066 [2024-11-28 06:45:21.618342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.066 [2024-11-28 06:45:21.618348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.066 [2024-11-28 06:45:21.618354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:20:11.066 [2024-11-28 06:45:21.618360] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:11.066 [2024-11-28 06:45:21.619659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.066 [2024-11-28 06:45:21.619765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.066 [2024-11-28 06:45:21.619777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:20:11.066 [2024-11-28 06:45:21.619783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.066 [2024-11-28 06:45:21.619835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.066 [2024-11-28 06:45:21.619841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.066 [2024-11-28 06:45:21.619848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:11.066 [2024-11-28 06:45:21.619856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.066 [2024-11-28 06:45:21.624512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.066 [2024-11-28 06:45:21.624537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.066 [2024-11-28 06:45:21.624551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.066 [2024-11-28 06:45:21.624557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.066 [2024-11-28 06:45:21.624603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.066 [2024-11-28 06:45:21.624610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.066 [2024-11-28 06:45:21.624616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.066 [2024-11-28 06:45:21.624623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.066 [2024-11-28 06:45:21.624678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.066 [2024-11-28 06:45:21.624686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.066 [2024-11-28 06:45:21.624695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.624700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.624740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.624747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.067 [2024-11-28 06:45:21.624756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.624762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.632063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.632105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.067 [2024-11-28 06:45:21.632112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.632119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.067 [2024-11-28 06:45:21.635364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
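A quick sanity check on the WAF figure in the statistics dump above: the two counters printed beside it suggest WAF is total media writes divided by user-issued writes (an assumed definition, consistent with the dump but not confirmed from the FTL source here). A minimal check in Python:

    total_writes = 128448  # "total writes" from the dump above
    user_writes = 127488   # "user writes" from the dump above
    print(f"WAF: {total_writes / user_writes:.4f}")  # prints WAF: 1.0075, matching the log

The 960 extra blocks over the user's 127488 would then be writes the FTL issued on its own behalf (metadata and housekeeping) during the run.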
00:20:11.067 [2024-11-28 06:45:21.635374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.067 [2024-11-28 06:45:21.635418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.067 [2024-11-28 06:45:21.635466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.067 [2024-11-28 06:45:21.635539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.067 [2024-11-28 06:45:21.635583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.067 [2024-11-28 06:45:21.635631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.067 [2024-11-28 06:45:21.635680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.067 [2024-11-28 06:45:21.635685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.067 [2024-11-28 06:45:21.635691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.067 [2024-11-28 06:45:21.635938] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 117.106 ms, result 0 00:20:12.021 00:20:12.021 00:20:12.021 06:45:22 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:20:14.556 06:45:24 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.556 [2024-11-28 06:45:24.886039] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
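For scale, the spdk_dd step above reads --count=262144 logical blocks from the ftl0 bdev into the test file; assuming FTL's 4 KiB logical block size (an assumption, but consistent with both the "Copying: .../1024 [MB]" progress counter further down and the L2P figures in the startup dump that follows), that is exactly 1 GiB:

    blocks = 262144              # --count from the spdk_dd command line above
    block_size = 4096            # bytes; assumed FTL logical block size
    print(blocks * block_size // 2**20, "MiB")   # 1024 MiB
    # The startup dump below also reports 20971520 L2P entries at 4 bytes each,
    # i.e. 20971520 * 4 / 2**20 = 80.00 MiB, matching the
    # "Region l2p ... blocks: 80.00 MiB" layout line.

The md5sum call just before it presumably records a reference checksum so the test can verify that data written before the dirty shutdown reads back unchanged once the FTL device is brought up again.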
00:20:14.556 [2024-11-28 06:45:24.886180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85960 ] 00:20:14.556 [2024-11-28 06:45:25.028064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.556 [2024-11-28 06:45:25.057950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.556 [2024-11-28 06:45:25.140761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.556 [2024-11-28 06:45:25.140838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.556 [2024-11-28 06:45:25.287415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.287465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.556 [2024-11-28 06:45:25.287479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.556 [2024-11-28 06:45:25.287491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.287547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.287557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.556 [2024-11-28 06:45:25.287565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:14.556 [2024-11-28 06:45:25.287575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.287595] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.556 [2024-11-28 06:45:25.287854] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.556 [2024-11-28 06:45:25.287870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.287884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.556 [2024-11-28 06:45:25.287892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:14.556 [2024-11-28 06:45:25.287902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.289178] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.556 [2024-11-28 06:45:25.291444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.291478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.556 [2024-11-28 06:45:25.291488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:20:14.556 [2024-11-28 06:45:25.291500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.291551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.291561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.556 [2024-11-28 06:45:25.291569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:14.556 [2024-11-28 06:45:25.291575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.296176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 
06:45:25.296367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.556 [2024-11-28 06:45:25.296384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:20:14.556 [2024-11-28 06:45:25.296391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.296474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.296489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.556 [2024-11-28 06:45:25.296505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:14.556 [2024-11-28 06:45:25.296516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.296559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.296569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.556 [2024-11-28 06:45:25.296577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.556 [2024-11-28 06:45:25.296587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.296609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.556 [2024-11-28 06:45:25.297933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.297960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.556 [2024-11-28 06:45:25.297968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:20:14.556 [2024-11-28 06:45:25.297975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.298005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.298013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.556 [2024-11-28 06:45:25.298023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:14.556 [2024-11-28 06:45:25.298035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.298054] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.556 [2024-11-28 06:45:25.298071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:14.556 [2024-11-28 06:45:25.298102] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.556 [2024-11-28 06:45:25.298117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:14.556 [2024-11-28 06:45:25.298187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:14.556 [2024-11-28 06:45:25.298200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.556 [2024-11-28 06:45:25.298209] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:14.556 [2024-11-28 06:45:25.298222] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298230] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298237] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.556 [2024-11-28 06:45:25.298244] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.556 [2024-11-28 06:45:25.298251] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:14.556 [2024-11-28 06:45:25.298257] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:14.556 [2024-11-28 06:45:25.298265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.298272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.556 [2024-11-28 06:45:25.298281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:14.556 [2024-11-28 06:45:25.298291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.298349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.556 [2024-11-28 06:45:25.298357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.556 [2024-11-28 06:45:25.298364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:14.556 [2024-11-28 06:45:25.298370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.556 [2024-11-28 06:45:25.298437] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.556 [2024-11-28 06:45:25.298447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.556 [2024-11-28 06:45:25.298459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.556 [2024-11-28 06:45:25.298482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298496] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.556 [2024-11-28 06:45:25.298503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.556 [2024-11-28 06:45:25.298519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.556 [2024-11-28 06:45:25.298526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.556 [2024-11-28 06:45:25.298532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.556 [2024-11-28 06:45:25.298538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.556 [2024-11-28 06:45:25.298545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:14.556 [2024-11-28 06:45:25.298551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.556 [2024-11-28 06:45:25.298563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:14.556 [2024-11-28 06:45:25.298569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:14.556 [2024-11-28 06:45:25.298583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:14.556 [2024-11-28 06:45:25.298590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.556 [2024-11-28 06:45:25.298605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.556 [2024-11-28 06:45:25.298629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.556 [2024-11-28 06:45:25.298650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.556 [2024-11-28 06:45:25.298671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.556 [2024-11-28 06:45:25.298692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.556 [2024-11-28 06:45:25.298727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.556 [2024-11-28 06:45:25.298735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:14.556 [2024-11-28 06:45:25.298742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.556 [2024-11-28 06:45:25.298749] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.556 [2024-11-28 06:45:25.298759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.556 [2024-11-28 06:45:25.298770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.556 [2024-11-28 06:45:25.298791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.556 [2024-11-28 06:45:25.298798] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.556 [2024-11-28 06:45:25.298806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.556 [2024-11-28 06:45:25.298814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.556 [2024-11-28 06:45:25.298821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.556 [2024-11-28 06:45:25.298828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.556 [2024-11-28 06:45:25.298836] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.556 [2024-11-28 06:45:25.298846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.556 [2024-11-28 06:45:25.298855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.556 [2024-11-28 06:45:25.298864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:14.557 [2024-11-28 06:45:25.298872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:14.557 [2024-11-28 06:45:25.298880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:14.557 [2024-11-28 06:45:25.298887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:14.557 [2024-11-28 06:45:25.298897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:14.557 [2024-11-28 06:45:25.298905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:14.557 [2024-11-28 06:45:25.298913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:14.557 [2024-11-28 06:45:25.298921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:14.557 [2024-11-28 06:45:25.298929] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:14.557 [2024-11-28 06:45:25.298936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:14.557 [2024-11-28 06:45:25.298944] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:14.557 [2024-11-28 06:45:25.298953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:14.557 [2024-11-28 06:45:25.298960] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.557 [2024-11-28 06:45:25.298967] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.557 [2024-11-28 06:45:25.298975] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.557 [2024-11-28 06:45:25.298982] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.557 [2024-11-28 06:45:25.298989] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.557 [2024-11-28 06:45:25.298996] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:14.557 [2024-11-28 06:45:25.299004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.557 [2024-11-28 06:45:25.299010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.557 [2024-11-28 06:45:25.299020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:20:14.557 [2024-11-28 06:45:25.299033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.557 [2024-11-28 06:45:25.305192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.557 [2024-11-28 06:45:25.305325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.557 [2024-11-28 06:45:25.305397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:20:14.557 [2024-11-28 06:45:25.305431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.557 [2024-11-28 06:45:25.305630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.557 [2024-11-28 06:45:25.305668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.557 [2024-11-28 06:45:25.305780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:14.557 [2024-11-28 06:45:25.305806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.330797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.331158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.816 [2024-11-28 06:45:25.331480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:20:14.816 [2024-11-28 06:45:25.331555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.331618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.331722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.816 [2024-11-28 06:45:25.331787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:14.816 [2024-11-28 06:45:25.331820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.332248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.332364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.816 [2024-11-28 06:45:25.332427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:14.816 [2024-11-28 06:45:25.332504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.332666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.332700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.816 [2024-11-28 06:45:25.332773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:14.816 [2024-11-28 06:45:25.332809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.338038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.338155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.816 [2024-11-28 06:45:25.338215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:20:14.816 [2024-11-28 
06:45:25.338291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.340587] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:14.816 [2024-11-28 06:45:25.340732] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.816 [2024-11-28 06:45:25.340747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.340756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.816 [2024-11-28 06:45:25.340766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:20:14.816 [2024-11-28 06:45:25.340773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.355271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.355381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.816 [2024-11-28 06:45:25.355435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.453 ms 00:20:14.816 [2024-11-28 06:45:25.355457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.357214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.357325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.816 [2024-11-28 06:45:25.357408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:20:14.816 [2024-11-28 06:45:25.357437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.358996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.359106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.816 [2024-11-28 06:45:25.359162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:20:14.816 [2024-11-28 06:45:25.359263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.359502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.359600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.816 [2024-11-28 06:45:25.359655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:20:14.816 [2024-11-28 06:45:25.359748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.377839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.378014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.816 [2024-11-28 06:45:25.378072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.053 ms 00:20:14.816 [2024-11-28 06:45:25.378157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.385609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:14.816 [2024-11-28 06:45:25.388145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.388260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.816 [2024-11-28 06:45:25.388366] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.917 ms 00:20:14.816 [2024-11-28 06:45:25.388391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.388463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.388544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.816 [2024-11-28 06:45:25.388569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.816 [2024-11-28 06:45:25.388595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.389760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.389863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.816 [2024-11-28 06:45:25.389924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:20:14.816 [2024-11-28 06:45:25.389946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.391288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.391385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:14.816 [2024-11-28 06:45:25.391435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:20:14.816 [2024-11-28 06:45:25.391456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.391496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.391548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.816 [2024-11-28 06:45:25.391596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.816 [2024-11-28 06:45:25.391615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.391658] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.816 [2024-11-28 06:45:25.391681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.391724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.816 [2024-11-28 06:45:25.391812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:14.816 [2024-11-28 06:45:25.391834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.395102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.395204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.816 [2024-11-28 06:45:25.395256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:20:14.816 [2024-11-28 06:45:25.395284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.395357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.816 [2024-11-28 06:45:25.395435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.816 [2024-11-28 06:45:25.395463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:14.816 [2024-11-28 06:45:25.395481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.816 [2024-11-28 06:45:25.401459] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 112.667 ms, result 0 00:20:16.190  [2024-11-28T06:45:27.895Z] Copying: 984/1048576 [kB] (984 kBps) [2024-11-28T06:45:28.831Z] Copying: 5520/1048576 [kB] (4536 kBps) [2024-11-28T06:45:29.771Z] Copying: 52/1024 [MB] (47 MBps) [2024-11-28T06:45:30.704Z] Copying: 103/1024 [MB] (50 MBps) [2024-11-28T06:45:31.640Z] Copying: 157/1024 [MB] (54 MBps) [2024-11-28T06:45:32.575Z] Copying: 212/1024 [MB] (54 MBps) [2024-11-28T06:45:33.951Z] Copying: 265/1024 [MB] (52 MBps) [2024-11-28T06:45:34.886Z] Copying: 309/1024 [MB] (43 MBps) [2024-11-28T06:45:35.818Z] Copying: 337/1024 [MB] (27 MBps) [2024-11-28T06:45:36.754Z] Copying: 359/1024 [MB] (21 MBps) [2024-11-28T06:45:37.689Z] Copying: 397/1024 [MB] (38 MBps) [2024-11-28T06:45:38.625Z] Copying: 427/1024 [MB] (30 MBps) [2024-11-28T06:45:40.001Z] Copying: 458/1024 [MB] (31 MBps) [2024-11-28T06:45:40.935Z] Copying: 502/1024 [MB] (44 MBps) [2024-11-28T06:45:41.883Z] Copying: 551/1024 [MB] (48 MBps) [2024-11-28T06:45:42.835Z] Copying: 605/1024 [MB] (53 MBps) [2024-11-28T06:45:43.770Z] Copying: 659/1024 [MB] (54 MBps) [2024-11-28T06:45:44.705Z] Copying: 715/1024 [MB] (55 MBps) [2024-11-28T06:45:45.639Z] Copying: 769/1024 [MB] (54 MBps) [2024-11-28T06:45:46.573Z] Copying: 823/1024 [MB] (53 MBps) [2024-11-28T06:45:47.964Z] Copying: 875/1024 [MB] (52 MBps) [2024-11-28T06:45:48.899Z] Copying: 931/1024 [MB] (55 MBps) [2024-11-28T06:45:49.466Z] Copying: 982/1024 [MB] (51 MBps) [2024-11-28T06:45:50.402Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-28 06:45:50.039334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.632 [2024-11-28 06:45:50.039391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:39.632 [2024-11-28 06:45:50.039405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.632 [2024-11-28 06:45:50.039412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.632 [2024-11-28 06:45:50.039434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.632 [2024-11-28 06:45:50.039895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.632 [2024-11-28 06:45:50.039912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:39.632 [2024-11-28 06:45:50.039921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:20:39.632 [2024-11-28 06:45:50.039934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.632 [2024-11-28 06:45:50.040155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.632 [2024-11-28 06:45:50.040166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:39.632 [2024-11-28 06:45:50.040174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:39.632 [2024-11-28 06:45:50.040182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.632 [2024-11-28 06:45:50.049780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.632 [2024-11-28 06:45:50.049966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:39.632 [2024-11-28 06:45:50.049982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:20:39.632 [2024-11-28 06:45:50.049990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.632 [2024-11-28 06:45:50.058185] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.058216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:39.633 [2024-11-28 06:45:50.058226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:20:39.633 [2024-11-28 06:45:50.058234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.059749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.059778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:39.633 [2024-11-28 06:45:50.059786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:20:39.633 [2024-11-28 06:45:50.059793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.063433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.063566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:39.633 [2024-11-28 06:45:50.063582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:20:39.633 [2024-11-28 06:45:50.063589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.066625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.066741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:39.633 [2024-11-28 06:45:50.066756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:20:39.633 [2024-11-28 06:45:50.066763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.068353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.068383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:39.633 [2024-11-28 06:45:50.068392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:20:39.633 [2024-11-28 06:45:50.068398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.069463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.069492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:39.633 [2024-11-28 06:45:50.069501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:20:39.633 [2024-11-28 06:45:50.069508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.070622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.070651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:39.633 [2024-11-28 06:45:50.070659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:20:39.633 [2024-11-28 06:45:50.070666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.633 [2024-11-28 06:45:50.071895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.633 [2024-11-28 06:45:50.071922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:39.633 [2024-11-28 06:45:50.071931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:20:39.633 [2024-11-28 06:45:50.071937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:20:39.633 [2024-11-28 06:45:50.071962] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:39.633 [2024-11-28 06:45:50.071976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:20:39.633 [2024-11-28 06:45:50.071986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open
[Bands 3-98: 0 / 261120 wr_cnt: 0 state: free; 96 identical per-band NOTICE entries elided]
00:20:39.634 [2024-11-28 06:45:50.072714] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:39.634 [2024-11-28 06:45:50.072722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:39.634 [2024-11-28 06:45:50.072737] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:39.634 [2024-11-28 06:45:50.072745] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c3e096a-f90c-4ce9-b0ab-baccc446d2b8 00:20:39.634 [2024-11-28 06:45:50.072752] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:20:39.634 [2024-11-28 06:45:50.072762] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 139456 00:20:39.634 [2024-11-28 06:45:50.072772] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 137472 00:20:39.634 [2024-11-28 06:45:50.072780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0144 00:20:39.634 [2024-11-28 06:45:50.072790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:39.634 [2024-11-28 06:45:50.072797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:39.634 [2024-11-28 06:45:50.072804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:39.634 [2024-11-28 06:45:50.072811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:39.634 [2024-11-28 06:45:50.072817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:39.634 [2024-11-28 06:45:50.072823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.634 [2024-11-28 06:45:50.072830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:39.634 [2024-11-28 06:45:50.072838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:20:39.634 [2024-11-28 06:45:50.072845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.074173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.634 [2024-11-28 06:45:50.074193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:39.634 [2024-11-28 06:45:50.074201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:20:39.634 [2024-11-28 06:45:50.074209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.074260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.634 [2024-11-28 06:45:50.074267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:39.634 [2024-11-28 06:45:50.074275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:39.634 [2024-11-28 06:45:50.074286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.079298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.079331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.634 [2024-11-28 06:45:50.079340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.079348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.079395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.079403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:20:39.634 [2024-11-28 06:45:50.079410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.079417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.079469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.079478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.634 [2024-11-28 06:45:50.079486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.079493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.079508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.079515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.634 [2024-11-28 06:45:50.079527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.079534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.087572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.087611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.634 [2024-11-28 06:45:50.087625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.087633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.634 [2024-11-28 06:45:50.091266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.091273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.634 [2024-11-28 06:45:50.091345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.091352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.634 [2024-11-28 06:45:50.091390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.091401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.634 [2024-11-28 06:45:50.091481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.091488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091522] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:39.634 [2024-11-28 06:45:50.091530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.634 [2024-11-28 06:45:50.091537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.634 [2024-11-28 06:45:50.091567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.634 [2024-11-28 06:45:50.091575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.634 [2024-11-28 06:45:50.091588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.635 [2024-11-28 06:45:50.091594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.635 [2024-11-28 06:45:50.091632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:39.635 [2024-11-28 06:45:50.091646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.635 [2024-11-28 06:45:50.091654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:39.635 [2024-11-28 06:45:50.091661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.635 [2024-11-28 06:45:50.091798] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.432 ms, result 0 00:20:39.635 00:20:39.635 00:20:39.635 06:45:50 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:41.536 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:41.536 06:45:52 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.536 [2024-11-28 06:45:52.290005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
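With the 'FTL shutdown' management process finished, dirty_shutdown.sh moves on to verifying that data written before the shutdown survived: md5sum -c checks the previously dumped testfile against its recorded checksum, then spdk_dd recreates the bdev stack from ftl.json and dumps the remaining range out of ftl0 into testfile2 for the next comparison. A minimal sketch of that verify/read-back pattern is below; the paths are shortened for readability, and the 4 KiB logical block size is inferred from the log rather than stated in it (--count=262144 blocks matching the 1024 MB shown in the copy progress implies 4096 bytes per block):

  # Sketch only: paths shortened; 4 KiB block size inferred, not stated in the log.
  md5sum -c testfile.md5 || exit 1        # previously dumped range must match its checksum
  spdk_dd --ib=ftl0 --of=testfile2 \
          --count=262144 --skip=262144 \  # read 262144 blocks after skipping 262144 (1 GiB each at 4 KiB)
          --json=ftl.json                 # bdev configuration saved before the dirty shutdown

The --json config is what lets spdk_dd bring ftl0 back up in a fresh process, which is why a full 'FTL startup' sequence follows immediately in the log.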
00:20:41.536 [2024-11-28 06:45:52.290118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86251 ] 00:20:41.794 [2024-11-28 06:45:52.425321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.794 [2024-11-28 06:45:52.454547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.794 [2024-11-28 06:45:52.536323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.795 [2024-11-28 06:45:52.536397] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.054 [2024-11-28 06:45:52.682375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.054 [2024-11-28 06:45:52.682537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.054 [2024-11-28 06:45:52.682555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.054 [2024-11-28 06:45:52.682564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.054 [2024-11-28 06:45:52.682632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.054 [2024-11-28 06:45:52.682645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.054 [2024-11-28 06:45:52.682653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:42.055 [2024-11-28 06:45:52.682663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.682682] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.055 [2024-11-28 06:45:52.682931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.055 [2024-11-28 06:45:52.682946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.682955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.055 [2024-11-28 06:45:52.682967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:42.055 [2024-11-28 06:45:52.682974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.683975] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.055 [2024-11-28 06:45:52.686216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.686249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.055 [2024-11-28 06:45:52.686259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:20:42.055 [2024-11-28 06:45:52.686270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.686323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.686332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.055 [2024-11-28 06:45:52.686343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:42.055 [2024-11-28 06:45:52.686350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.690929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 
06:45:52.690961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.055 [2024-11-28 06:45:52.690970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:20:42.055 [2024-11-28 06:45:52.690977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.691048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.691057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.055 [2024-11-28 06:45:52.691065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:42.055 [2024-11-28 06:45:52.691072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.691110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.691119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.055 [2024-11-28 06:45:52.691126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:42.055 [2024-11-28 06:45:52.691142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.691164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.055 [2024-11-28 06:45:52.692450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.692479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.055 [2024-11-28 06:45:52.692487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:20:42.055 [2024-11-28 06:45:52.692498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.692528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.692536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.055 [2024-11-28 06:45:52.692546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:42.055 [2024-11-28 06:45:52.692553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.692572] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.055 [2024-11-28 06:45:52.692592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:42.055 [2024-11-28 06:45:52.692627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.055 [2024-11-28 06:45:52.692641] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:42.055 [2024-11-28 06:45:52.692726] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:42.055 [2024-11-28 06:45:52.692739] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.055 [2024-11-28 06:45:52.692755] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:42.055 [2024-11-28 06:45:52.692768] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.055 [2024-11-28 06:45:52.692779] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.055 [2024-11-28 06:45:52.692787] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.055 [2024-11-28 06:45:52.692794] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.055 [2024-11-28 06:45:52.692805] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:42.055 [2024-11-28 06:45:52.692812] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:42.055 [2024-11-28 06:45:52.692818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.692828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.055 [2024-11-28 06:45:52.692836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:42.055 [2024-11-28 06:45:52.692845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.692908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.055 [2024-11-28 06:45:52.692916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.055 [2024-11-28 06:45:52.692923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:42.055 [2024-11-28 06:45:52.692934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.055 [2024-11-28 06:45:52.693011] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.055 [2024-11-28 06:45:52.693019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.055 [2024-11-28 06:45:52.693027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.055 [2024-11-28 06:45:52.693049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.055 [2024-11-28 06:45:52.693068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.055 [2024-11-28 06:45:52.693080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.055 [2024-11-28 06:45:52.693087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.055 [2024-11-28 06:45:52.693093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.055 [2024-11-28 06:45:52.693100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.055 [2024-11-28 06:45:52.693108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:42.055 [2024-11-28 06:45:52.693116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.055 [2024-11-28 06:45:52.693131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:42.055 [2024-11-28 06:45:52.693140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:42.055 [2024-11-28 06:45:52.693155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:42.055 [2024-11-28 06:45:52.693163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.055 [2024-11-28 06:45:52.693178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.055 [2024-11-28 06:45:52.693200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.055 [2024-11-28 06:45:52.693221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.055 [2024-11-28 06:45:52.693246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.055 [2024-11-28 06:45:52.693268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.055 [2024-11-28 06:45:52.693282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.055 [2024-11-28 06:45:52.693289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:42.055 [2024-11-28 06:45:52.693296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.055 [2024-11-28 06:45:52.693303] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.055 [2024-11-28 06:45:52.693311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.055 [2024-11-28 06:45:52.693318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.055 [2024-11-28 06:45:52.693326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.055 [2024-11-28 06:45:52.693334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.056 [2024-11-28 06:45:52.693341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.056 [2024-11-28 06:45:52.693349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.056 [2024-11-28 06:45:52.693358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.056 [2024-11-28 06:45:52.693365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.056 [2024-11-28 06:45:52.693372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.056 [2024-11-28 06:45:52.693381] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.056 [2024-11-28 06:45:52.693391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.056 [2024-11-28 06:45:52.693404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.056 [2024-11-28 06:45:52.693412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:42.056 [2024-11-28 06:45:52.693420] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:42.056 [2024-11-28 06:45:52.693428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:42.056 [2024-11-28 06:45:52.693436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:42.056 [2024-11-28 06:45:52.693445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:42.056 [2024-11-28 06:45:52.693452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:42.056 [2024-11-28 06:45:52.693460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:42.056 [2024-11-28 06:45:52.693468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:42.056 [2024-11-28 06:45:52.693476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:42.056 [2024-11-28 06:45:52.693483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:42.056 [2024-11-28 06:45:52.693492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:42.056 [2024-11-28 06:45:52.693500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:42.056 [2024-11-28 06:45:52.693506] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.056 [2024-11-28 06:45:52.693514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.056 [2024-11-28 06:45:52.693525] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.056 [2024-11-28 06:45:52.693534] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.056 [2024-11-28 06:45:52.693541] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.056 [2024-11-28 06:45:52.693548] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:42.056 [2024-11-28 06:45:52.693555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.693562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.056 [2024-11-28 06:45:52.693569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:20:42.056 [2024-11-28 06:45:52.693584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.699351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.699388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.056 [2024-11-28 06:45:52.699396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.734 ms 00:20:42.056 [2024-11-28 06:45:52.699404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.699483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.699491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.056 [2024-11-28 06:45:52.699499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:42.056 [2024-11-28 06:45:52.699506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.711999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.712040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.056 [2024-11-28 06:45:52.712054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.452 ms 00:20:42.056 [2024-11-28 06:45:52.712063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.712100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.712109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.056 [2024-11-28 06:45:52.712118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.056 [2024-11-28 06:45:52.712128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.712477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.712500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.056 [2024-11-28 06:45:52.712515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:42.056 [2024-11-28 06:45:52.712522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.712635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.712645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.056 [2024-11-28 06:45:52.712653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:42.056 [2024-11-28 06:45:52.712661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.717860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.717890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.056 [2024-11-28 06:45:52.717899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:20:42.056 [2024-11-28 
06:45:52.717910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.720145] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:42.056 [2024-11-28 06:45:52.720182] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.056 [2024-11-28 06:45:52.720191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.720199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.056 [2024-11-28 06:45:52.720208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:20:42.056 [2024-11-28 06:45:52.720214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.734565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.734604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.056 [2024-11-28 06:45:52.734614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.315 ms 00:20:42.056 [2024-11-28 06:45:52.734622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.736304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.736332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.056 [2024-11-28 06:45:52.736341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:20:42.056 [2024-11-28 06:45:52.736347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.737617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.737646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.056 [2024-11-28 06:45:52.737654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:20:42.056 [2024-11-28 06:45:52.737662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.737864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.737875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.056 [2024-11-28 06:45:52.737883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:42.056 [2024-11-28 06:45:52.737892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.755667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.755739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.056 [2024-11-28 06:45:52.755751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:20:42.056 [2024-11-28 06:45:52.755759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.763437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.056 [2024-11-28 06:45:52.765912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.765953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.056 [2024-11-28 06:45:52.765965] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.825 ms 00:20:42.056 [2024-11-28 06:45:52.765974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.766040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.766052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.056 [2024-11-28 06:45:52.766062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.056 [2024-11-28 06:45:52.766074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.766591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.766624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.056 [2024-11-28 06:45:52.766633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:20:42.056 [2024-11-28 06:45:52.766641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.767890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.767920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:42.056 [2024-11-28 06:45:52.767928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:20:42.056 [2024-11-28 06:45:52.767935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.056 [2024-11-28 06:45:52.767961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.056 [2024-11-28 06:45:52.767969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.056 [2024-11-28 06:45:52.767979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.057 [2024-11-28 06:45:52.767986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.057 [2024-11-28 06:45:52.768016] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.057 [2024-11-28 06:45:52.768033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.057 [2024-11-28 06:45:52.768043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.057 [2024-11-28 06:45:52.768050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:42.057 [2024-11-28 06:45:52.768057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.057 [2024-11-28 06:45:52.771441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.057 [2024-11-28 06:45:52.771475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.057 [2024-11-28 06:45:52.771484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:20:42.057 [2024-11-28 06:45:52.771497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.057 [2024-11-28 06:45:52.771557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.057 [2024-11-28 06:45:52.771566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.057 [2024-11-28 06:45:52.771577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:42.057 [2024-11-28 06:45:52.771584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.057 [2024-11-28 06:45:52.772520] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 89.741 ms, result 0 00:20:43.430  [2024-11-28T06:45:55.134Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-28T06:45:56.066Z] Copying: 90/1024 [MB] (42 MBps) [2024-11-28T06:45:57.000Z] Copying: 110/1024 [MB] (20 MBps) [2024-11-28T06:45:58.375Z] Copying: 135/1024 [MB] (25 MBps) [2024-11-28T06:45:58.955Z] Copying: 169/1024 [MB] (33 MBps) [2024-11-28T06:46:00.364Z] Copying: 199/1024 [MB] (29 MBps) [2024-11-28T06:46:01.298Z] Copying: 223/1024 [MB] (24 MBps) [2024-11-28T06:46:02.230Z] Copying: 247/1024 [MB] (24 MBps) [2024-11-28T06:46:03.163Z] Copying: 261/1024 [MB] (13 MBps) [2024-11-28T06:46:04.097Z] Copying: 289/1024 [MB] (28 MBps) [2024-11-28T06:46:05.031Z] Copying: 303/1024 [MB] (14 MBps) [2024-11-28T06:46:05.961Z] Copying: 332/1024 [MB] (28 MBps) [2024-11-28T06:46:07.336Z] Copying: 345/1024 [MB] (12 MBps) [2024-11-28T06:46:08.271Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-28T06:46:09.207Z] Copying: 374/1024 [MB] (16 MBps) [2024-11-28T06:46:10.143Z] Copying: 388/1024 [MB] (14 MBps) [2024-11-28T06:46:11.079Z] Copying: 405/1024 [MB] (16 MBps) [2024-11-28T06:46:12.014Z] Copying: 430/1024 [MB] (25 MBps) [2024-11-28T06:46:12.949Z] Copying: 448/1024 [MB] (17 MBps) [2024-11-28T06:46:14.325Z] Copying: 467/1024 [MB] (19 MBps) [2024-11-28T06:46:15.259Z] Copying: 491/1024 [MB] (24 MBps) [2024-11-28T06:46:16.193Z] Copying: 510/1024 [MB] (18 MBps) [2024-11-28T06:46:17.138Z] Copying: 532/1024 [MB] (22 MBps) [2024-11-28T06:46:18.084Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-28T06:46:19.019Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-28T06:46:19.953Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-28T06:46:21.327Z] Copying: 580/1024 [MB] (11 MBps) [2024-11-28T06:46:22.259Z] Copying: 591/1024 [MB] (11 MBps) [2024-11-28T06:46:23.193Z] Copying: 608/1024 [MB] (16 MBps) [2024-11-28T06:46:24.130Z] Copying: 619/1024 [MB] (11 MBps) [2024-11-28T06:46:25.068Z] Copying: 635/1024 [MB] (15 MBps) [2024-11-28T06:46:26.010Z] Copying: 652/1024 [MB] (17 MBps) [2024-11-28T06:46:26.944Z] Copying: 673/1024 [MB] (20 MBps) [2024-11-28T06:46:28.320Z] Copying: 691/1024 [MB] (17 MBps) [2024-11-28T06:46:29.252Z] Copying: 713/1024 [MB] (21 MBps) [2024-11-28T06:46:30.186Z] Copying: 739/1024 [MB] (26 MBps) [2024-11-28T06:46:31.122Z] Copying: 759/1024 [MB] (19 MBps) [2024-11-28T06:46:32.059Z] Copying: 771/1024 [MB] (11 MBps) [2024-11-28T06:46:32.994Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-28T06:46:34.369Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-28T06:46:34.960Z] Copying: 806/1024 [MB] (11 MBps) [2024-11-28T06:46:36.349Z] Copying: 817/1024 [MB] (11 MBps) [2024-11-28T06:46:37.285Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-28T06:46:38.221Z] Copying: 840/1024 [MB] (11 MBps) [2024-11-28T06:46:39.153Z] Copying: 852/1024 [MB] (11 MBps) [2024-11-28T06:46:40.086Z] Copying: 864/1024 [MB] (12 MBps) [2024-11-28T06:46:41.021Z] Copying: 876/1024 [MB] (12 MBps) [2024-11-28T06:46:41.956Z] Copying: 887/1024 [MB] (11 MBps) [2024-11-28T06:46:43.333Z] Copying: 899/1024 [MB] (11 MBps) [2024-11-28T06:46:44.268Z] Copying: 911/1024 [MB] (12 MBps) [2024-11-28T06:46:45.205Z] Copying: 926/1024 [MB] (14 MBps) [2024-11-28T06:46:46.139Z] Copying: 938/1024 [MB] (11 MBps) [2024-11-28T06:46:47.075Z] Copying: 949/1024 [MB] (11 MBps) [2024-11-28T06:46:48.011Z] Copying: 961/1024 [MB] (11 MBps) [2024-11-28T06:46:48.946Z] Copying: 976/1024 [MB] (15 MBps) [2024-11-28T06:46:50.324Z] Copying: 988/1024 [MB] (11 MBps) [2024-11-28T06:46:51.260Z] Copying: 1000/1024 [MB] (11 MBps) [2024-11-28T06:46:52.250Z] Copying: 
1011/1024 [MB] (11 MBps) [2024-11-28T06:46:52.251Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-28T06:46:52.251Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-28 06:46:52.239926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.481 [2024-11-28 06:46:52.239989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.481 [2024-11-28 06:46:52.240002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:41.481 [2024-11-28 06:46:52.240010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.481 [2024-11-28 06:46:52.240030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.481 [2024-11-28 06:46:52.240503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.481 [2024-11-28 06:46:52.240537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.481 [2024-11-28 06:46:52.240547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:21:41.481 [2024-11-28 06:46:52.240554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.481 [2024-11-28 06:46:52.240927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.481 [2024-11-28 06:46:52.240952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.481 [2024-11-28 06:46:52.240967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:21:41.481 [2024-11-28 06:46:52.240979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.481 [2024-11-28 06:46:52.246586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.481 [2024-11-28 06:46:52.246631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.481 [2024-11-28 06:46:52.246650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.584 ms 00:21:41.481 [2024-11-28 06:46:52.246661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.256457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.256489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:41.741 [2024-11-28 06:46:52.256499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.772 ms 00:21:41.741 [2024-11-28 06:46:52.256513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.258197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.258229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.741 [2024-11-28 06:46:52.258238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:21:41.741 [2024-11-28 06:46:52.258245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.262468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.262507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.741 [2024-11-28 06:46:52.262516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:21:41.741 [2024-11-28 06:46:52.262523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.268373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:41.741 [2024-11-28 06:46:52.268415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.741 [2024-11-28 06:46:52.268427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:21:41.741 [2024-11-28 06:46:52.268436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.270356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.270387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:41.741 [2024-11-28 06:46:52.270396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:21:41.741 [2024-11-28 06:46:52.270403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.272182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.272224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:41.741 [2024-11-28 06:46:52.272235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:21:41.741 [2024-11-28 06:46:52.272242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.273507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.273535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.741 [2024-11-28 06:46:52.273544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:21:41.741 [2024-11-28 06:46:52.273551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.275552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.741 [2024-11-28 06:46:52.275593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.741 [2024-11-28 06:46:52.275603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:21:41.741 [2024-11-28 06:46:52.275610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.741 [2024-11-28 06:46:52.275640] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.741 [2024-11-28 06:46:52.275655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:41.741 [2024-11-28 06:46:52.275671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:21:41.741 [2024-11-28 06:46:52.275679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.741 [2024-11-28 06:46:52.275735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: 
free 00:21:41.741 [2024-11-28 06:46:52.275742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
[Bands 11-99 elided: 89 identical entries, each reporting "0 / 261120 wr_cnt: 0 state: free"]
00:21:41.742 [2024-11-28 06:46:52.276420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.742 [2024-11-28 06:46:52.276435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.743 [2024-11-28 06:46:52.276443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9c3e096a-f90c-4ce9-b0ab-baccc446d2b8 00:21:41.743 [2024-11-28 06:46:52.276452] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:21:41.743 [2024-11-28 06:46:52.276458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:41.743 [2024-11-28 06:46:52.276465] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:41.743 [2024-11-28 06:46:52.276472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:41.743 [2024-11-28 06:46:52.276479] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.743 [2024-11-28 06:46:52.276487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.743 [2024-11-28 06:46:52.276494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.743 [2024-11-28 06:46:52.276500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.743
[2024-11-28 06:46:52.276506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.743 [2024-11-28 06:46:52.276513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.743 [2024-11-28 06:46:52.276521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.743 [2024-11-28 06:46:52.276529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:21:41.743 [2024-11-28 06:46:52.276538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.277881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.743 [2024-11-28 06:46:52.277904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.743 [2024-11-28 06:46:52.277913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:21:41.743 [2024-11-28 06:46:52.277921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.277992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.743 [2024-11-28 06:46:52.278006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.743 [2024-11-28 06:46:52.278019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:41.743 [2024-11-28 06:46:52.278026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.282845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.282874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.743 [2024-11-28 06:46:52.282884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.282891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.282937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.282949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.743 [2024-11-28 06:46:52.282956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.282963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.283014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.283024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.743 [2024-11-28 06:46:52.283032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.283039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.283054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.283067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.743 [2024-11-28 06:46:52.283080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.283086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.290931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.290968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.743 [2024-11-28 06:46:52.290985] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.290992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.743 [2024-11-28 06:46:52.294542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.743 [2024-11-28 06:46:52.294612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.743 [2024-11-28 06:46:52.294656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.743 [2024-11-28 06:46:52.294756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.743 [2024-11-28 06:46:52.294806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.743 [2024-11-28 06:46:52.294866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.294916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.743 [2024-11-28 06:46:52.294925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.743 [2024-11-28 06:46:52.294933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.743 [2024-11-28 06:46:52.294943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.743 [2024-11-28 06:46:52.295047] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.100 ms, result 0 00:21:41.743 00:21:41.743 00:21:41.743 06:46:52 
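One figure in the statistics dump above deserves a gloss: WAF (write amplification factor) prints as inf because, by the usual definition, it is media writes divided by user writes, and this run issued none of the latter (total writes: 960, user writes: 0). As a worked line:

    \mathrm{WAF} = \frac{\text{total media writes}}{\text{user writes}} = \frac{960}{0} = \infty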
-- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:44.277 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:44.277 Process with pid 85063 is not found 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@37 -- # killprocess 85063 00:21:44.277 06:46:54 -- common/autotest_common.sh@936 -- # '[' -z 85063 ']' 00:21:44.277 06:46:54 -- common/autotest_common.sh@940 -- # kill -0 85063 00:21:44.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85063) - No such process 00:21:44.277 06:46:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85063 is not found' 00:21:44.277 06:46:54 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:21:44.546 Remove shared memory files 00:21:44.547 06:46:55 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:21:44.547 06:46:55 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:44.547 06:46:55 -- ftl/common.sh@205 -- # rm -f rm -f 00:21:44.547 06:46:55 -- ftl/common.sh@206 -- # rm -f rm -f 00:21:44.547 06:46:55 -- ftl/common.sh@207 -- # rm -f rm -f 00:21:44.547 06:46:55 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:44.547 06:46:55 -- ftl/common.sh@209 -- # rm -f rm -f 00:21:44.547 ************************************ 00:21:44.547 END TEST ftl_dirty_shutdown 00:21:44.547 ************************************ 00:21:44.547 00:21:44.547 real 2m48.737s 00:21:44.547 user 3m3.923s 00:21:44.547 sys 0m21.643s 00:21:44.547 06:46:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:44.547 06:46:55 -- common/autotest_common.sh@10 -- # set +x 00:21:44.547 06:46:55 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:21:44.547 06:46:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:44.547 06:46:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.547 06:46:55 -- common/autotest_common.sh@10 -- # set +x 00:21:44.547 ************************************ 00:21:44.547 START TEST ftl_upgrade_shutdown 00:21:44.547 ************************************ 00:21:44.547 06:46:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:21:44.547 * Looking for test storage... 
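For context, the "testfile2: OK" line above is the tail end of a checksum round-trip. A minimal sketch of that pattern in shell, assuming GNU md5sum and the file names from this log (the real dirty_shutdown.sh drives I/O through the FTL bdev between the two steps):

    # Capture the expected digest before the dirty shutdown.
    md5sum testfile2 > testfile2.md5
    # ... dirty shutdown and FTL restore happen in between ...
    # Verify after restore: prints "testfile2: OK" on a match, exits non-zero otherwise.
    md5sum -c testfile2.md5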
00:21:44.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.548 06:46:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:44.548 06:46:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:44.548 06:46:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:44.808 06:46:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:44.808 06:46:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:44.808 06:46:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:44.808 06:46:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:44.808 06:46:55 -- scripts/common.sh@335 -- # IFS=.-: 00:21:44.808 06:46:55 -- scripts/common.sh@335 -- # read -ra ver1 00:21:44.808 06:46:55 -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.808 06:46:55 -- scripts/common.sh@336 -- # read -ra ver2 00:21:44.808 06:46:55 -- scripts/common.sh@337 -- # local 'op=<' 00:21:44.808 06:46:55 -- scripts/common.sh@339 -- # ver1_l=2 00:21:44.808 06:46:55 -- scripts/common.sh@340 -- # ver2_l=1 00:21:44.808 06:46:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:44.808 06:46:55 -- scripts/common.sh@343 -- # case "$op" in 00:21:44.808 06:46:55 -- scripts/common.sh@344 -- # : 1 00:21:44.808 06:46:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:44.808 06:46:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.808 06:46:55 -- scripts/common.sh@364 -- # decimal 1 00:21:44.808 06:46:55 -- scripts/common.sh@352 -- # local d=1 00:21:44.808 06:46:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.808 06:46:55 -- scripts/common.sh@354 -- # echo 1 00:21:44.808 06:46:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:44.808 06:46:55 -- scripts/common.sh@365 -- # decimal 2 00:21:44.808 06:46:55 -- scripts/common.sh@352 -- # local d=2 00:21:44.808 06:46:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.808 06:46:55 -- scripts/common.sh@354 -- # echo 2 00:21:44.808 06:46:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:44.809 06:46:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:44.809 06:46:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:44.809 06:46:55 -- scripts/common.sh@367 -- # return 0 00:21:44.809 06:46:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.809 06:46:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:44.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.809 --rc genhtml_branch_coverage=1 00:21:44.809 --rc genhtml_function_coverage=1 00:21:44.809 --rc genhtml_legend=1 00:21:44.809 --rc geninfo_all_blocks=1 00:21:44.809 --rc geninfo_unexecuted_blocks=1 00:21:44.809 00:21:44.809 ' 00:21:44.809 06:46:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:44.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.809 --rc genhtml_branch_coverage=1 00:21:44.809 --rc genhtml_function_coverage=1 00:21:44.809 --rc genhtml_legend=1 00:21:44.809 --rc geninfo_all_blocks=1 00:21:44.809 --rc geninfo_unexecuted_blocks=1 00:21:44.809 00:21:44.809 ' 00:21:44.809 06:46:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:44.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.809 --rc genhtml_branch_coverage=1 00:21:44.809 --rc genhtml_function_coverage=1 00:21:44.809 --rc genhtml_legend=1 00:21:44.809 --rc geninfo_all_blocks=1 00:21:44.809 --rc geninfo_unexecuted_blocks=1 00:21:44.809 00:21:44.809 ' 00:21:44.809 06:46:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:44.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.809 --rc genhtml_branch_coverage=1 00:21:44.809 --rc genhtml_function_coverage=1 00:21:44.809 --rc genhtml_legend=1 00:21:44.809 --rc geninfo_all_blocks=1 00:21:44.809 --rc geninfo_unexecuted_blocks=1 00:21:44.809 00:21:44.809 ' 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:44.809 06:46:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:21:44.809 06:46:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.809 06:46:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.809 06:46:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:44.809 06:46:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:44.809 06:46:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.809 06:46:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:44.809 06:46:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:44.809 06:46:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.809 06:46:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.809 06:46:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:44.809 06:46:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:44.809 06:46:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.809 06:46:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.809 06:46:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:44.809 06:46:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:44.809 06:46:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.809 06:46:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.809 06:46:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:44.809 06:46:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:44.809 06:46:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.809 06:46:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.809 06:46:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.809 06:46:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.809 06:46:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:44.809 06:46:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:44.809 06:46:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.809 06:46:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@21 -- # export 
FTL_BASE_SIZE=20480 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:21:44.809 06:46:55 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:21:44.809 06:46:55 -- ftl/common.sh@81 -- # local base_bdev= 00:21:44.809 06:46:55 -- ftl/common.sh@82 -- # local cache_bdev= 00:21:44.809 06:46:55 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:44.809 06:46:55 -- ftl/common.sh@89 -- # spdk_tgt_pid=86975 00:21:44.809 06:46:55 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:44.809 06:46:55 -- ftl/common.sh@91 -- # waitforlisten 86975 00:21:44.809 06:46:55 -- common/autotest_common.sh@829 -- # '[' -z 86975 ']' 00:21:44.809 06:46:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.809 06:46:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.809 06:46:55 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:21:44.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.809 06:46:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.809 06:46:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.809 06:46:55 -- common/autotest_common.sh@10 -- # set +x 00:21:44.809 [2024-11-28 06:46:55.420958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
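The launch sequence traced above — fork spdk_tgt pinned to core 0, then waitforlisten on /var/tmp/spdk.sock — can be sketched as below. The polling loop is an illustrative stand-in for what waitforlisten does, not its actual implementation; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe, and the paths are the ones shown in this log:

    # Start the target pinned to core 0 and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the target answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done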
00:21:44.809 [2024-11-28 06:46:55.421070] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86975 ] 00:21:44.809 [2024-11-28 06:46:55.549214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.068 [2024-11-28 06:46:55.580169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:45.068 [2024-11-28 06:46:55.580376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.635 06:46:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.635 06:46:56 -- common/autotest_common.sh@862 -- # return 0 00:21:45.635 06:46:56 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:45.635 06:46:56 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:21:45.635 06:46:56 -- ftl/common.sh@99 -- # local params 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:21:45.635 06:46:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:45.635 06:46:56 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:21:45.635 06:46:56 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:21:45.635 06:46:56 -- ftl/common.sh@54 -- # local name=base 00:21:45.635 06:46:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:21:45.635 06:46:56 -- ftl/common.sh@56 -- # local size=20480 00:21:45.635 06:46:56 -- ftl/common.sh@59 -- # local base_bdev 00:21:45.635 06:46:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:21:45.894 06:46:56 -- ftl/common.sh@60 -- # base_bdev=basen1 00:21:45.894 06:46:56 -- ftl/common.sh@62 -- # local base_size 00:21:45.894 06:46:56 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:21:45.894 06:46:56 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:21:45.894 06:46:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:45.894 06:46:56 -- common/autotest_common.sh@1369 -- # local bs 00:21:45.894 06:46:56 -- common/autotest_common.sh@1370 -- # local nb 00:21:45.894 06:46:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:21:46.153 06:46:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:46.153 { 00:21:46.153 "name": "basen1", 00:21:46.153 "aliases": [ 00:21:46.153 "b581f9df-218d-4962-9c32-fe097c926542" 00:21:46.153 ], 00:21:46.153 "product_name": "NVMe disk", 00:21:46.153 "block_size": 4096, 00:21:46.153 "num_blocks": 1310720, 00:21:46.153 "uuid": "b581f9df-218d-4962-9c32-fe097c926542", 00:21:46.153 "assigned_rate_limits": { 00:21:46.153 "rw_ios_per_sec": 0, 00:21:46.153 
"rw_mbytes_per_sec": 0, 00:21:46.153 "r_mbytes_per_sec": 0, 00:21:46.153 "w_mbytes_per_sec": 0 00:21:46.153 }, 00:21:46.153 "claimed": true, 00:21:46.153 "claim_type": "read_many_write_one", 00:21:46.153 "zoned": false, 00:21:46.153 "supported_io_types": { 00:21:46.153 "read": true, 00:21:46.153 "write": true, 00:21:46.153 "unmap": true, 00:21:46.153 "write_zeroes": true, 00:21:46.153 "flush": true, 00:21:46.153 "reset": true, 00:21:46.153 "compare": true, 00:21:46.153 "compare_and_write": false, 00:21:46.153 "abort": true, 00:21:46.153 "nvme_admin": true, 00:21:46.153 "nvme_io": true 00:21:46.153 }, 00:21:46.153 "driver_specific": { 00:21:46.153 "nvme": [ 00:21:46.153 { 00:21:46.153 "pci_address": "0000:00:07.0", 00:21:46.153 "trid": { 00:21:46.153 "trtype": "PCIe", 00:21:46.154 "traddr": "0000:00:07.0" 00:21:46.154 }, 00:21:46.154 "ctrlr_data": { 00:21:46.154 "cntlid": 0, 00:21:46.154 "vendor_id": "0x1b36", 00:21:46.154 "model_number": "QEMU NVMe Ctrl", 00:21:46.154 "serial_number": "12341", 00:21:46.154 "firmware_revision": "8.0.0", 00:21:46.154 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:46.154 "oacs": { 00:21:46.154 "security": 0, 00:21:46.154 "format": 1, 00:21:46.154 "firmware": 0, 00:21:46.154 "ns_manage": 1 00:21:46.154 }, 00:21:46.154 "multi_ctrlr": false, 00:21:46.154 "ana_reporting": false 00:21:46.154 }, 00:21:46.154 "vs": { 00:21:46.154 "nvme_version": "1.4" 00:21:46.154 }, 00:21:46.154 "ns_data": { 00:21:46.154 "id": 1, 00:21:46.154 "can_share": false 00:21:46.154 } 00:21:46.154 } 00:21:46.154 ], 00:21:46.154 "mp_policy": "active_passive" 00:21:46.154 } 00:21:46.154 } 00:21:46.154 ]' 00:21:46.154 06:46:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:46.154 06:46:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:46.154 06:46:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:46.154 06:46:56 -- common/autotest_common.sh@1373 -- # nb=1310720 00:21:46.154 06:46:56 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:21:46.154 06:46:56 -- common/autotest_common.sh@1377 -- # echo 5120 00:21:46.154 06:46:56 -- ftl/common.sh@63 -- # base_size=5120 00:21:46.154 06:46:56 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:21:46.154 06:46:56 -- ftl/common.sh@67 -- # clear_lvols 00:21:46.154 06:46:56 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:46.154 06:46:56 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:46.412 06:46:57 -- ftl/common.sh@28 -- # stores=5fd135b4-266b-4e37-ac15-4f127906ec97 00:21:46.412 06:46:57 -- ftl/common.sh@29 -- # for lvs in $stores 00:21:46.412 06:46:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fd135b4-266b-4e37-ac15-4f127906ec97 00:21:46.670 06:46:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:21:46.929 06:46:57 -- ftl/common.sh@68 -- # lvs=24fc45f9-de1c-483e-ad47-3053d89879aa 00:21:46.929 06:46:57 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 24fc45f9-de1c-483e-ad47-3053d89879aa 00:21:46.929 06:46:57 -- ftl/common.sh@107 -- # base_bdev=1d8de08d-4d29-4c32-bf35-1790f65e9f77 00:21:46.929 06:46:57 -- ftl/common.sh@108 -- # [[ -z 1d8de08d-4d29-4c32-bf35-1790f65e9f77 ]] 00:21:46.929 06:46:57 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 1d8de08d-4d29-4c32-bf35-1790f65e9f77 5120 00:21:46.929 06:46:57 -- ftl/common.sh@35 -- # local name=cache 00:21:46.929 06:46:57 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:21:46.929 06:46:57 -- ftl/common.sh@37 -- # local base_bdev=1d8de08d-4d29-4c32-bf35-1790f65e9f77 00:21:46.929 06:46:57 -- ftl/common.sh@38 -- # local cache_size=5120 00:21:46.929 06:46:57 -- ftl/common.sh@41 -- # get_bdev_size 1d8de08d-4d29-4c32-bf35-1790f65e9f77 00:21:46.929 06:46:57 -- common/autotest_common.sh@1367 -- # local bdev_name=1d8de08d-4d29-4c32-bf35-1790f65e9f77 00:21:46.929 06:46:57 -- common/autotest_common.sh@1368 -- # local bdev_info 00:21:46.929 06:46:57 -- common/autotest_common.sh@1369 -- # local bs 00:21:46.929 06:46:57 -- common/autotest_common.sh@1370 -- # local nb 00:21:46.929 06:46:57 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1d8de08d-4d29-4c32-bf35-1790f65e9f77 00:21:47.186 06:46:57 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:21:47.186 { 00:21:47.186 "name": "1d8de08d-4d29-4c32-bf35-1790f65e9f77", 00:21:47.186 "aliases": [ 00:21:47.186 "lvs/basen1p0" 00:21:47.186 ], 00:21:47.186 "product_name": "Logical Volume", 00:21:47.186 "block_size": 4096, 00:21:47.186 "num_blocks": 5242880, 00:21:47.187 "uuid": "1d8de08d-4d29-4c32-bf35-1790f65e9f77", 00:21:47.187 "assigned_rate_limits": { 00:21:47.187 "rw_ios_per_sec": 0, 00:21:47.187 "rw_mbytes_per_sec": 0, 00:21:47.187 "r_mbytes_per_sec": 0, 00:21:47.187 "w_mbytes_per_sec": 0 00:21:47.187 }, 00:21:47.187 "claimed": false, 00:21:47.187 "zoned": false, 00:21:47.187 "supported_io_types": { 00:21:47.187 "read": true, 00:21:47.187 "write": true, 00:21:47.187 "unmap": true, 00:21:47.187 "write_zeroes": true, 00:21:47.187 "flush": false, 00:21:47.187 "reset": true, 00:21:47.187 "compare": false, 00:21:47.187 "compare_and_write": false, 00:21:47.187 "abort": false, 00:21:47.187 "nvme_admin": false, 00:21:47.187 "nvme_io": false 00:21:47.187 }, 00:21:47.187 "driver_specific": { 00:21:47.187 "lvol": { 00:21:47.187 "lvol_store_uuid": "24fc45f9-de1c-483e-ad47-3053d89879aa", 00:21:47.187 "base_bdev": "basen1", 00:21:47.187 "thin_provision": true, 00:21:47.187 "snapshot": false, 00:21:47.187 "clone": false, 00:21:47.187 "esnap_clone": false 00:21:47.187 } 00:21:47.187 } 00:21:47.187 } 00:21:47.187 ]' 00:21:47.187 06:46:57 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:21:47.187 06:46:57 -- common/autotest_common.sh@1372 -- # bs=4096 00:21:47.187 06:46:57 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:21:47.187 06:46:57 -- common/autotest_common.sh@1373 -- # nb=5242880 00:21:47.187 06:46:57 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:21:47.187 06:46:57 -- common/autotest_common.sh@1377 -- # echo 20480 00:21:47.187 06:46:57 -- ftl/common.sh@41 -- # local base_size=1024 00:21:47.187 06:46:57 -- ftl/common.sh@44 -- # local nvc_bdev 00:21:47.187 06:46:57 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:21:47.445 06:46:58 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:21:47.445 06:46:58 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:21:47.445 06:46:58 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:21:47.703 06:46:58 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:21:47.703 06:46:58 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:21:47.703 06:46:58 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1d8de08d-4d29-4c32-bf35-1790f65e9f77 -c cachen1p0 --l2p_dram_limit 2 00:21:47.703 
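Stripping the xtrace noise, the bdev stack assembled above comes down to six RPC calls; a condensed, illustrative replay follows, with UUIDs taken from this run (the test actually issues these through the ftl/common.sh helpers). Note that get_bdev_size is just block_size × num_blocks: 4096 × 1310720 = 5120 MiB for basen1, 4096 × 5242880 = 20480 MiB for the lvol.

    # RPC abbreviates the full rpc.py path used throughout this log.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: NVMe at 0000:00:07.0 -> lvstore "lvs" -> 20480 MiB thin-provisioned lvol.
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
    $RPC bdev_lvol_create_lvstore basen1 lvs
    $RPC bdev_lvol_create basen1p0 20480 -t -u 24fc45f9-de1c-483e-ad47-3053d89879aa

    # Cache side: NVMe at 0000:00:06.0, split so cachen1p0 is a 5120 MiB partition.
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
    $RPC bdev_split_create cachen1 -s 5120 1

    # FTL bdev on top: the lvol (by UUID) as base device, cachen1p0 as the NV cache.
    $RPC -t 60 bdev_ftl_create -b ftl -d 1d8de08d-4d29-4c32-bf35-1790f65e9f77 -c cachen1p0 --l2p_dram_limit 2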
[2024-11-28 06:46:58.457758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.457800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:21:47.703 [2024-11-28 06:46:58.457818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:47.703 [2024-11-28 06:46:58.457827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.457881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.457891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:47.703 [2024-11-28 06:46:58.457903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:21:47.703 [2024-11-28 06:46:58.457910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.457934] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:21:47.703 [2024-11-28 06:46:58.458174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:21:47.703 [2024-11-28 06:46:58.458190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.458201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:47.703 [2024-11-28 06:46:58.458211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:21:47.703 [2024-11-28 06:46:58.458218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.458246] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ae285c28-66e5-4575-8dfb-6dd67c3fe0fb 00:21:47.703 [2024-11-28 06:46:58.459329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.459358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:21:47.703 [2024-11-28 06:46:58.459369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:21:47.703 [2024-11-28 06:46:58.459379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.464640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.464677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:47.703 [2024-11-28 06:46:58.464690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.189 ms 00:21:47.703 [2024-11-28 06:46:58.464718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.464757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.464768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:47.703 [2024-11-28 06:46:58.464779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:21:47.703 [2024-11-28 06:46:58.464791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.464834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.464845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:21:47.703 [2024-11-28 06:46:58.464856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:21:47.703 [2024-11-28 06:46:58.464864] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.464891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:21:47.703 [2024-11-28 06:46:58.466337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.466365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:47.703 [2024-11-28 06:46:58.466376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.451 ms 00:21:47.703 [2024-11-28 06:46:58.466384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.466417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.466426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:21:47.703 [2024-11-28 06:46:58.466443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:47.703 [2024-11-28 06:46:58.466449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.466474] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:21:47.703 [2024-11-28 06:46:58.466596] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:21:47.703 [2024-11-28 06:46:58.466610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:21:47.703 [2024-11-28 06:46:58.466623] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:21:47.703 [2024-11-28 06:46:58.466635] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:21:47.703 [2024-11-28 06:46:58.466644] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:21:47.703 [2024-11-28 06:46:58.466653] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:21:47.703 [2024-11-28 06:46:58.466663] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:21:47.703 [2024-11-28 06:46:58.466677] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:21:47.703 [2024-11-28 06:46:58.466684] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:21:47.703 [2024-11-28 06:46:58.466693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.466700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:21:47.703 [2024-11-28 06:46:58.466731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:21:47.703 [2024-11-28 06:46:58.466738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.466806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.703 [2024-11-28 06:46:58.466815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:21:47.703 [2024-11-28 06:46:58.466825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:21:47.703 [2024-11-28 06:46:58.466831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.703 [2024-11-28 06:46:58.466903] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:21:47.703 [2024-11-28 06:46:58.466913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:21:47.704 [2024-11-28 
06:46:58.466923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:47.704 [2024-11-28 06:46:58.466930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.466939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:21:47.704 [2024-11-28 06:46:58.466947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.466955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:21:47.704 [2024-11-28 06:46:58.466962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:21:47.704 [2024-11-28 06:46:58.466971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:21:47.704 [2024-11-28 06:46:58.466977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.466987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:21:47.704 [2024-11-28 06:46:58.466996] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:21:47.704 [2024-11-28 06:46:58.467007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:21:47.704 [2024-11-28 06:46:58.467025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:21:47.704 [2024-11-28 06:46:58.467051] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:21:47.704 [2024-11-28 06:46:58.467060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:21:47.704 [2024-11-28 06:46:58.467077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:21:47.704 [2024-11-28 06:46:58.467084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:21:47.704 [2024-11-28 06:46:58.467103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:21:47.704 [2024-11-28 06:46:58.467129] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:21:47.704 [2024-11-28 06:46:58.467154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:21:47.704 [2024-11-28 06:46:58.467180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:21:47.704 [2024-11-28 
06:46:58.467196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:21:47.704 [2024-11-28 06:46:58.467205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467221] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:21:47.704 [2024-11-28 06:46:58.467230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467247] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:21:47.704 [2024-11-28 06:46:58.467256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:21:47.704 [2024-11-28 06:46:58.467266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:47.704 [2024-11-28 06:46:58.467286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:21:47.704 [2024-11-28 06:46:58.467293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:21:47.704 [2024-11-28 06:46:58.467304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:21:47.704 [2024-11-28 06:46:58.467312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:21:47.704 [2024-11-28 06:46:58.467323] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:21:47.704 [2024-11-28 06:46:58.467330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:21:47.704 [2024-11-28 06:46:58.467341] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:21:47.704 [2024-11-28 06:46:58.467358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467368] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:21:47.704 [2024-11-28 06:46:58.467375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:21:47.704 [2024-11-28 06:46:58.467400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:21:47.704 [2024-11-28 06:46:58.467407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:21:47.704 [2024-11-28 06:46:58.467416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:21:47.704 [2024-11-28 06:46:58.467422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467432] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:21:47.704 [2024-11-28 06:46:58.467464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:21:47.704 [2024-11-28 06:46:58.467470] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:21:47.704 [2024-11-28 06:46:58.467480] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467488] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.704 [2024-11-28 06:46:58.467497] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:21:47.704 [2024-11-28 06:46:58.467503] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:21:47.704 [2024-11-28 06:46:58.467512] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:21:47.704 [2024-11-28 06:46:58.467519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.704 [2024-11-28 06:46:58.467530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:21:47.704 [2024-11-28 06:46:58.467539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.661 ms 00:21:47.704 [2024-11-28 06:46:58.467549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.473354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.473391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:47.963 [2024-11-28 06:46:58.473401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.762 ms 00:21:47.963 [2024-11-28 06:46:58.473415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.473453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.473464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:21:47.963 [2024-11-28 06:46:58.473472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:21:47.963 [2024-11-28 06:46:58.473480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.482192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.482226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:47.963 [2024-11-28 06:46:58.482236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.680 ms 00:21:47.963 [2024-11-28 
06:46:58.482248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.482273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.482282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:47.963 [2024-11-28 06:46:58.482291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:47.963 [2024-11-28 06:46:58.482299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.482629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.482664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:47.963 [2024-11-28 06:46:58.482672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:21:47.963 [2024-11-28 06:46:58.482682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.482821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.482926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:47.963 [2024-11-28 06:46:58.482953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:21:47.963 [2024-11-28 06:46:58.482964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.488529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.488635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:47.963 [2024-11-28 06:46:58.488684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.543 ms 00:21:47.963 [2024-11-28 06:46:58.488720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.496997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:21:47.963 [2024-11-28 06:46:58.497910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.498005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:21:47.963 [2024-11-28 06:46:58.498058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.115 ms 00:21:47.963 [2024-11-28 06:46:58.498083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.514742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:47.963 [2024-11-28 06:46:58.514853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:21:47.963 [2024-11-28 06:46:58.514906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.616 ms 00:21:47.963 [2024-11-28 06:46:58.514928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:47.963 [2024-11-28 06:46:58.514988] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
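The scrub announced in the next records walks the entire 4 GiB NV cache data region; set against the 2699.413 ms duration reported just below, the effective scrub rate works out to roughly:

    \frac{4\ \mathrm{GiB}}{2.699\ \mathrm{s}} \approx 1.48\ \mathrm{GiB/s}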
00:21:47.963 [2024-11-28 06:46:58.515025] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:21:50.494 [2024-11-28 06:47:01.214430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.214932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:21:50.494 [2024-11-28 06:47:01.215162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2699.413 ms 00:21:50.494 [2024-11-28 06:47:01.215240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.215588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.215672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:21:50.494 [2024-11-28 06:47:01.215861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:21:50.494 [2024-11-28 06:47:01.215935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.221928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.222219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:21:50.494 [2024-11-28 06:47:01.222415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.848 ms 00:21:50.494 [2024-11-28 06:47:01.222450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.225963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.225994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:21:50.494 [2024-11-28 06:47:01.226005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.375 ms 00:21:50.494 [2024-11-28 06:47:01.226011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.226202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.226212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:21:50.494 [2024-11-28 06:47:01.226223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.158 ms 00:21:50.494 [2024-11-28 06:47:01.226230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.253416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.253570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:21:50.494 [2024-11-28 06:47:01.253592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.162 ms 00:21:50.494 [2024-11-28 06:47:01.253601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.257445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.257480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:21:50.494 [2024-11-28 06:47:01.257496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.804 ms 00:21:50.494 [2024-11-28 06:47:01.257505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.258765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.258797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:21:50.494 [2024-11-28 06:47:01.258808] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.221 ms 00:21:50.494 [2024-11-28 06:47:01.258816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.262998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.263030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:21:50.494 [2024-11-28 06:47:01.263042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.158 ms 00:21:50.494 [2024-11-28 06:47:01.263049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.263087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.263097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:21:50.494 [2024-11-28 06:47:01.263109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:50.494 [2024-11-28 06:47:01.263117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.494 [2024-11-28 06:47:01.263182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:50.494 [2024-11-28 06:47:01.263192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:21:50.494 [2024-11-28 06:47:01.263254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:21:50.494 [2024-11-28 06:47:01.263263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:50.753 [2024-11-28 06:47:01.264113] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2805.955 ms, result 0 00:21:50.753 { 00:21:50.753 "name": "ftl", 00:21:50.753 "uuid": "ae285c28-66e5-4575-8dfb-6dd67c3fe0fb" 00:21:50.753 } 00:21:50.753 06:47:01 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:21:50.753 [2024-11-28 06:47:01.464049] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.753 06:47:01 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:21:51.012 06:47:01 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:21:51.271 [2024-11-28 06:47:01.840459] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:21:51.271 06:47:01 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:21:51.271 [2024-11-28 06:47:02.028824] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:51.529 06:47:02 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:51.788 Fill FTL, iteration 1 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@38 -- # (( 
i = 0 )) 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:21:51.788 06:47:02 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:51.788 06:47:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:51.788 06:47:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:51.788 06:47:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:51.788 06:47:02 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:21:51.788 06:47:02 -- ftl/common.sh@163 -- # spdk_ini_pid=87080 00:21:51.788 06:47:02 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:21:51.788 06:47:02 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:21:51.788 06:47:02 -- ftl/common.sh@165 -- # waitforlisten 87080 /var/tmp/spdk.tgt.sock 00:21:51.788 06:47:02 -- common/autotest_common.sh@829 -- # '[' -z 87080 ']' 00:21:51.788 06:47:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:21:51.788 06:47:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.788 06:47:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:21:51.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:21:51.788 06:47:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.788 06:47:02 -- common/autotest_common.sh@10 -- # set +x 00:21:51.788 [2024-11-28 06:47:02.412922] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:51.788 [2024-11-28 06:47:02.413364] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87080 ] 00:21:51.788 [2024-11-28 06:47:02.548035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.047 [2024-11-28 06:47:02.577787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:52.047 [2024-11-28 06:47:02.578070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.614 06:47:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.614 06:47:03 -- common/autotest_common.sh@862 -- # return 0 00:21:52.614 06:47:03 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:21:52.873 ftln1 00:21:52.873 06:47:03 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:21:52.873 06:47:03 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:21:53.131 06:47:03 -- ftl/common.sh@173 -- # echo ']}' 00:21:53.131 06:47:03 -- ftl/common.sh@176 -- # killprocess 87080 00:21:53.131 06:47:03 -- common/autotest_common.sh@936 -- # '[' -z 87080 ']' 00:21:53.131 06:47:03 -- common/autotest_common.sh@940 -- # kill -0 87080 00:21:53.131 06:47:03 -- common/autotest_common.sh@941 -- # uname 00:21:53.131 06:47:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.131 06:47:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87080 00:21:53.131 killing process with pid 87080 00:21:53.131 06:47:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:53.131 06:47:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:53.131 06:47:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87080' 00:21:53.131 06:47:03 -- common/autotest_common.sh@955 -- # kill 87080 00:21:53.131 06:47:03 -- common/autotest_common.sh@960 -- # wait 87080 00:21:53.390 06:47:03 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:21:53.391 06:47:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:53.391 [2024-11-28 06:47:03.969033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:53.391 [2024-11-28 06:47:03.969134] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87117 ] 00:21:53.391 [2024-11-28 06:47:04.105068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.391 [2024-11-28 06:47:04.134186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.814  [2024-11-28T06:47:06.519Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-28T06:47:07.454Z] Copying: 457/1024 [MB] (245 MBps) [2024-11-28T06:47:08.388Z] Copying: 712/1024 [MB] (255 MBps) [2024-11-28T06:47:08.646Z] Copying: 973/1024 [MB] (261 MBps) [2024-11-28T06:47:08.904Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:21:58.134 00:21:58.134 06:47:08 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:21:58.134 06:47:08 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:21:58.134 Calculate MD5 checksum, iteration 1 00:21:58.134 06:47:08 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:58.134 06:47:08 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:58.134 06:47:08 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:58.134 06:47:08 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:58.134 06:47:08 -- ftl/common.sh@154 -- # return 0 00:21:58.134 06:47:08 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:58.134 [2024-11-28 06:47:08.733016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:21:58.134 [2024-11-28 06:47:08.733120] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87172 ] 00:21:58.134 [2024-11-28 06:47:08.866977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.134 [2024-11-28 06:47:08.893866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.510  [2024-11-28T06:47:10.847Z] Copying: 664/1024 [MB] (664 MBps) [2024-11-28T06:47:10.847Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:22:00.077 00:22:00.077 06:47:10 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:00.077 06:47:10 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:02.611 Fill FTL, iteration 2 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=acd28112a895f2b8aab58130863ef640 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:02.611 06:47:12 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:02.611 06:47:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:02.611 06:47:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:02.611 06:47:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:02.611 06:47:12 -- ftl/common.sh@154 -- # return 0 00:22:02.611 06:47:12 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:02.611 [2024-11-28 06:47:12.886989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:02.611 [2024-11-28 06:47:12.887066] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87223 ] 00:22:02.611 [2024-11-28 06:47:13.015540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.611 [2024-11-28 06:47:13.042440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.545  [2024-11-28T06:47:15.252Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-28T06:47:16.623Z] Copying: 514/1024 [MB] (260 MBps) [2024-11-28T06:47:17.189Z] Copying: 777/1024 [MB] (263 MBps) [2024-11-28T06:47:17.447Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:22:06.677 00:22:06.677 Calculate MD5 checksum, iteration 2 00:22:06.677 06:47:17 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:06.677 06:47:17 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:06.677 06:47:17 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:06.677 06:47:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:06.677 06:47:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:06.677 06:47:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:06.677 06:47:17 -- ftl/common.sh@154 -- # return 0 00:22:06.677 06:47:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:06.677 [2024-11-28 06:47:17.385621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:06.678 [2024-11-28 06:47:17.385745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87271 ] 00:22:06.936 [2024-11-28 06:47:17.518828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.936 [2024-11-28 06:47:17.545624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.307  [2024-11-28T06:47:19.659Z] Copying: 701/1024 [MB] (701 MBps) [2024-11-28T06:47:22.954Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:22:12.184 00:22:12.184 06:47:22 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:12.184 06:47:22 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:14.083 06:47:24 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:14.083 06:47:24 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3ee61297e1defd23e699ec8aa787e3ad 00:22:14.084 06:47:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:14.084 06:47:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:14.084 06:47:24 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:14.084 [2024-11-28 06:47:24.663152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.084 [2024-11-28 06:47:24.663195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:14.084 [2024-11-28 06:47:24.663206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:14.084 [2024-11-28 06:47:24.663213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.084 [2024-11-28 06:47:24.663238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.084 [2024-11-28 06:47:24.663247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:14.084 [2024-11-28 06:47:24.663254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:14.084 [2024-11-28 06:47:24.663261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.084 [2024-11-28 06:47:24.663277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.084 [2024-11-28 06:47:24.663284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:14.084 [2024-11-28 06:47:24.663292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:14.084 [2024-11-28 06:47:24.663298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.084 [2024-11-28 06:47:24.663349] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.187 ms, result 0 00:22:14.084 true 00:22:14.084 06:47:24 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:14.343 { 00:22:14.343 "name": "ftl", 00:22:14.343 "properties": [ 00:22:14.343 { 00:22:14.343 "name": "superblock_version", 00:22:14.343 "value": 5, 00:22:14.343 "read-only": true 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "name": "base_device", 00:22:14.343 "bands": [ 00:22:14.343 { 00:22:14.343 "id": 0, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 1, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 2, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 
00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 3, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 4, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 5, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 6, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 7, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 8, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 9, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 10, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 11, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 12, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 13, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 14, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 15, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 16, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 17, 00:22:14.343 "state": "FREE", 00:22:14.343 "validity": 0.0 00:22:14.343 } 00:22:14.343 ], 00:22:14.343 "read-only": true 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "name": "cache_device", 00:22:14.343 "type": "bdev", 00:22:14.343 "chunks": [ 00:22:14.343 { 00:22:14.343 "id": 0, 00:22:14.343 "state": "CLOSED", 00:22:14.343 "utilization": 1.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 1, 00:22:14.343 "state": "CLOSED", 00:22:14.343 "utilization": 1.0 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 2, 00:22:14.343 "state": "OPEN", 00:22:14.343 "utilization": 0.001953125 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "id": 3, 00:22:14.343 "state": "OPEN", 00:22:14.343 "utilization": 0.0 00:22:14.343 } 00:22:14.343 ], 00:22:14.343 "read-only": true 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "name": "verbose_mode", 00:22:14.343 "value": true, 00:22:14.343 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:14.343 }, 00:22:14.343 { 00:22:14.343 "name": "prep_upgrade_on_shutdown", 00:22:14.343 "value": false, 00:22:14.343 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:14.343 } 00:22:14.343 ] 00:22:14.343 } 00:22:14.343 06:47:24 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:14.343 [2024-11-28 06:47:25.043453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.343 [2024-11-28 06:47:25.043491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:14.343 [2024-11-28 06:47:25.043501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:14.343 [2024-11-28 06:47:25.043507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.343 [2024-11-28 06:47:25.043526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:22:14.343 [2024-11-28 06:47:25.043532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:14.343 [2024-11-28 06:47:25.043538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:14.343 [2024-11-28 06:47:25.043544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.343 [2024-11-28 06:47:25.043559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.343 [2024-11-28 06:47:25.043565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:14.343 [2024-11-28 06:47:25.043571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:14.343 [2024-11-28 06:47:25.043576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.343 [2024-11-28 06:47:25.043622] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:22:14.343 true 00:22:14.343 06:47:25 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:14.343 06:47:25 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:14.343 06:47:25 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:14.602 06:47:25 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:14.602 06:47:25 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:14.602 06:47:25 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:14.861 [2024-11-28 06:47:25.431784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.861 [2024-11-28 06:47:25.431818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:14.861 [2024-11-28 06:47:25.431827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:14.861 [2024-11-28 06:47:25.431833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.861 [2024-11-28 06:47:25.431850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.861 [2024-11-28 06:47:25.431856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:14.861 [2024-11-28 06:47:25.431862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:14.861 [2024-11-28 06:47:25.431867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.861 [2024-11-28 06:47:25.431882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:14.861 [2024-11-28 06:47:25.431893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:14.861 [2024-11-28 06:47:25.431899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:14.861 [2024-11-28 06:47:25.431904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:14.861 [2024-11-28 06:47:25.431949] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:22:14.861 true 00:22:14.861 06:47:25 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:14.861 { 00:22:14.861 "name": "ftl", 00:22:14.861 "properties": [ 00:22:14.861 { 00:22:14.861 "name": "superblock_version", 00:22:14.861 "value": 5, 00:22:14.861 "read-only": true 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 
"name": "base_device", 00:22:14.861 "bands": [ 00:22:14.861 { 00:22:14.861 "id": 0, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 1, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 2, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 3, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 4, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 5, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 6, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 7, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 8, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 9, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 10, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 11, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 12, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 13, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 14, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.861 }, 00:22:14.861 { 00:22:14.861 "id": 15, 00:22:14.861 "state": "FREE", 00:22:14.861 "validity": 0.0 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "id": 16, 00:22:14.862 "state": "FREE", 00:22:14.862 "validity": 0.0 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "id": 17, 00:22:14.862 "state": "FREE", 00:22:14.862 "validity": 0.0 00:22:14.862 } 00:22:14.862 ], 00:22:14.862 "read-only": true 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "name": "cache_device", 00:22:14.862 "type": "bdev", 00:22:14.862 "chunks": [ 00:22:14.862 { 00:22:14.862 "id": 0, 00:22:14.862 "state": "CLOSED", 00:22:14.862 "utilization": 1.0 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "id": 1, 00:22:14.862 "state": "CLOSED", 00:22:14.862 "utilization": 1.0 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "id": 2, 00:22:14.862 "state": "OPEN", 00:22:14.862 "utilization": 0.001953125 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "id": 3, 00:22:14.862 "state": "OPEN", 00:22:14.862 "utilization": 0.0 00:22:14.862 } 00:22:14.862 ], 00:22:14.862 "read-only": true 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "name": "verbose_mode", 00:22:14.862 "value": true, 00:22:14.862 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:14.862 }, 00:22:14.862 { 00:22:14.862 "name": "prep_upgrade_on_shutdown", 00:22:14.862 "value": true, 00:22:14.862 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:14.862 } 00:22:14.862 ] 00:22:14.862 } 00:22:15.120 06:47:25 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:15.120 06:47:25 -- ftl/common.sh@130 -- # [[ -n 86975 ]] 00:22:15.120 06:47:25 -- ftl/common.sh@131 -- # killprocess 86975 00:22:15.120 06:47:25 -- common/autotest_common.sh@936 -- # '[' -z 86975 ']' 00:22:15.120 06:47:25 -- 
common/autotest_common.sh@940 -- # kill -0 86975 00:22:15.120 06:47:25 -- common/autotest_common.sh@941 -- # uname 00:22:15.120 06:47:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.120 06:47:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86975 00:22:15.120 06:47:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:15.120 06:47:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:15.120 killing process with pid 86975 00:22:15.120 06:47:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86975' 00:22:15.120 06:47:25 -- common/autotest_common.sh@955 -- # kill 86975 00:22:15.120 06:47:25 -- common/autotest_common.sh@960 -- # wait 86975 00:22:15.120 [2024-11-28 06:47:25.754140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:22:15.120 [2024-11-28 06:47:25.757027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:15.120 [2024-11-28 06:47:25.757143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:15.120 [2024-11-28 06:47:25.757198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:15.120 [2024-11-28 06:47:25.757211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:15.120 [2024-11-28 06:47:25.757233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:15.120 [2024-11-28 06:47:25.757607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:15.120 [2024-11-28 06:47:25.757622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:15.120 [2024-11-28 06:47:25.757630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:22:15.120 [2024-11-28 06:47:25.757637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.513211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.513269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:23.243 [2024-11-28 06:47:33.513282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7755.528 ms 00:22:23.243 [2024-11-28 06:47:33.513289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.514414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.514431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:23.243 [2024-11-28 06:47:33.514439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:22:23.243 [2024-11-28 06:47:33.514449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.515310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.515328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:22:23.243 [2024-11-28 06:47:33.515336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:22:23.243 [2024-11-28 06:47:33.515342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.516897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.516927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:23.243 [2024-11-28 06:47:33.516934] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:22:23.243 [2024-11-28 06:47:33.516940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.518398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.518430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:23.243 [2024-11-28 06:47:33.518442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:22:23.243 [2024-11-28 06:47:33.518448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.518510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.518518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:23.243 [2024-11-28 06:47:33.518525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:22:23.243 [2024-11-28 06:47:33.518530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.519631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.519655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:22:23.243 [2024-11-28 06:47:33.519662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.089 ms 00:22:23.243 [2024-11-28 06:47:33.519667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.520675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.520700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:22:23.243 [2024-11-28 06:47:33.520722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.984 ms 00:22:23.243 [2024-11-28 06:47:33.520728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.521579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.521604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:23.243 [2024-11-28 06:47:33.521611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:22:23.243 [2024-11-28 06:47:33.521617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.522560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.243 [2024-11-28 06:47:33.522587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:23.243 [2024-11-28 06:47:33.522594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:22:23.243 [2024-11-28 06:47:33.522599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.243 [2024-11-28 06:47:33.522621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:23.243 [2024-11-28 06:47:33.522632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:23.243 [2024-11-28 06:47:33.522644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:23.243 [2024-11-28 06:47:33.522650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:23.243 [2024-11-28 06:47:33.522656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.243 [2024-11-28 06:47:33.522763] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:23.243 [2024-11-28 06:47:33.522769] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ae285c28-66e5-4575-8dfb-6dd67c3fe0fb 00:22:23.243 [2024-11-28 06:47:33.522775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:23.243 [2024-11-28 06:47:33.522781] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:23.243 [2024-11-28 06:47:33.522786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:23.243 [2024-11-28 06:47:33.522793] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:23.244 [2024-11-28 06:47:33.522799] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:23.244 [2024-11-28 06:47:33.522805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:23.244 [2024-11-28 06:47:33.522811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:23.244 [2024-11-28 06:47:33.522816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:23.244 [2024-11-28 06:47:33.522822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:23.244 [2024-11-28 06:47:33.522828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.244 [2024-11-28 06:47:33.522833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:23.244 [2024-11-28 06:47:33.522839] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:22:23.244 [2024-11-28 06:47:33.522847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.524132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.244 [2024-11-28 06:47:33.524151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:23.244 [2024-11-28 06:47:33.524157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:22:23.244 [2024-11-28 06:47:33.524164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.524209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:23.244 [2024-11-28 06:47:33.524215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:23.244 [2024-11-28 06:47:33.524225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:22:23.244 [2024-11-28 06:47:33.524231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.528607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.528632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:23.244 [2024-11-28 06:47:33.528638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.528644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.528664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.528670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:23.244 [2024-11-28 06:47:33.528678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.528684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.528745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.528753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:23.244 [2024-11-28 06:47:33.528759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.528765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.528777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.528784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:23.244 [2024-11-28 06:47:33.528790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.528798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.536665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.536695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:23.244 [2024-11-28 06:47:33.536726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.536733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.539840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.539868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:23.244 
[2024-11-28 06:47:33.539875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.539890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.539917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.539924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:23.244 [2024-11-28 06:47:33.539931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.539937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.539966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.539973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:23.244 [2024-11-28 06:47:33.539979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.539985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.540035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.540043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:23.244 [2024-11-28 06:47:33.540049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.540055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.540076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.540083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:23.244 [2024-11-28 06:47:33.540089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.540095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.540125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.540133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:23.244 [2024-11-28 06:47:33.540139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.540145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.540176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:23.244 [2024-11-28 06:47:33.540184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:23.244 [2024-11-28 06:47:33.540193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:23.244 [2024-11-28 06:47:33.540199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:23.244 [2024-11-28 06:47:33.540299] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7783.226 ms, result 0 00:22:26.524 06:47:37 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:26.524 06:47:37 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:26.524 06:47:37 -- ftl/common.sh@81 -- # local base_bdev= 00:22:26.524 06:47:37 -- ftl/common.sh@82 -- # local cache_bdev= 00:22:26.524 06:47:37 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:26.524 06:47:37 -- ftl/common.sh@89 -- # spdk_tgt_pid=87480 00:22:26.524 06:47:37 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:26.524 06:47:37 -- ftl/common.sh@91 -- # waitforlisten 87480 00:22:26.524 06:47:37 -- common/autotest_common.sh@829 -- # '[' -z 87480 ']' 00:22:26.524 06:47:37 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:26.524 06:47:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.524 06:47:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.524 06:47:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.524 06:47:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.524 06:47:37 -- common/autotest_common.sh@10 -- # set +x 00:22:26.524 [2024-11-28 06:47:37.085087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:26.524 [2024-11-28 06:47:37.085191] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87480 ] 00:22:26.524 [2024-11-28 06:47:37.217752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.524 [2024-11-28 06:47:37.244608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:26.524 [2024-11-28 06:47:37.244804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.782 [2024-11-28 06:47:37.471291] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:26.782 [2024-11-28 06:47:37.471470] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:27.042 [2024-11-28 06:47:37.603678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.603730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:27.042 [2024-11-28 06:47:37.603741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:27.042 [2024-11-28 06:47:37.603747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.603785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.603793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:27.042 [2024-11-28 06:47:37.603801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:22:27.042 [2024-11-28 06:47:37.603806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.603820] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:27.042 [2024-11-28 06:47:37.603992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:27.042 [2024-11-28 06:47:37.604007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.604012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:27.042 [2024-11-28 06:47:37.604019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:22:27.042 [2024-11-28 06:47:37.604024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.605097] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:27.042 [2024-11-28 06:47:37.606933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.606965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:27.042 [2024-11-28 06:47:37.606975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.838 ms 00:22:27.042 [2024-11-28 06:47:37.606980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.607024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.607033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:27.042 [2024-11-28 06:47:37.607040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:27.042 [2024-11-28 06:47:37.607046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.611267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.611292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:27.042 [2024-11-28 06:47:37.611303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.189 ms 00:22:27.042 [2024-11-28 06:47:37.611309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.611340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.611348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:27.042 [2024-11-28 06:47:37.611354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:27.042 [2024-11-28 06:47:37.611360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.611403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.611410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:27.042 [2024-11-28 06:47:37.611418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:27.042 [2024-11-28 06:47:37.611425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.611443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:27.042 [2024-11-28 06:47:37.612614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.612636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:27.042 [2024-11-28 06:47:37.612643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.177 ms 00:22:27.042 [2024-11-28 06:47:37.612653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.612677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.612686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:27.042 [2024-11-28 06:47:37.612694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:27.042 [2024-11-28 06:47:37.612699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.612733] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
00:22:27.042 [2024-11-28 06:47:37.612748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:22:27.042 [2024-11-28 06:47:37.612774] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:27.042 [2024-11-28 06:47:37.612786] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:22:27.042 [2024-11-28 06:47:37.612844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:22:27.042 [2024-11-28 06:47:37.612852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:27.042 [2024-11-28 06:47:37.612860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:22:27.042 [2024-11-28 06:47:37.612867] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:27.042 [2024-11-28 06:47:37.612879] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:27.042 [2024-11-28 06:47:37.612885] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:27.042 [2024-11-28 06:47:37.612891] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:27.042 [2024-11-28 06:47:37.612896] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:22:27.042 [2024-11-28 06:47:37.612905] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:22:27.042 [2024-11-28 06:47:37.612911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.612918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:27.042 [2024-11-28 06:47:37.612926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:22:27.042 [2024-11-28 06:47:37.612933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.612980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.042 [2024-11-28 06:47:37.612987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:27.042 [2024-11-28 06:47:37.612995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:22:27.042 [2024-11-28 06:47:37.613000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.042 [2024-11-28 06:47:37.613060] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:27.042 [2024-11-28 06:47:37.613068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:27.042 [2024-11-28 06:47:37.613074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613087] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:27.043 [2024-11-28 06:47:37.613092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:27.043 [2024-11-28 06:47:37.613104] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:27.043 [2024-11-28 06:47:37.613110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:22:27.043 [2024-11-28 06:47:37.613115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:27.043 [2024-11-28 06:47:37.613128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:27.043 [2024-11-28 06:47:37.613133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:27.043 [2024-11-28 06:47:37.613143] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:27.043 [2024-11-28 06:47:37.613158] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:22:27.043 [2024-11-28 06:47:37.613162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:22:27.043 [2024-11-28 06:47:37.613172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:22:27.043 [2024-11-28 06:47:37.613177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:27.043 [2024-11-28 06:47:37.613188] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:27.043 [2024-11-28 06:47:37.613202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:27.043 [2024-11-28 06:47:37.613217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613226] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:27.043 [2024-11-28 06:47:37.613231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:27.043 [2024-11-28 06:47:37.613245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:27.043 [2024-11-28 06:47:37.613260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613271] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:22:27.043 [2024-11-28 06:47:37.613278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:27.043 [2024-11-28 06:47:37.613284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:27.043 [2024-11-28 06:47:37.613299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:27.043 [2024-11-28 06:47:37.613305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:27.043 [2024-11-28 06:47:37.613311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:27.043 [2024-11-28 06:47:37.613316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:27.043 [2024-11-28 06:47:37.613322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:27.043 [2024-11-28 06:47:37.613328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:27.043 [2024-11-28 06:47:37.613335] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:27.043 [2024-11-28 06:47:37.613343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:27.043 [2024-11-28 06:47:37.613356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:22:27.043 [2024-11-28 06:47:37.613375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:22:27.043 [2024-11-28 06:47:37.613381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:22:27.043 [2024-11-28 06:47:37.613387] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:22:27.043 [2024-11-28 06:47:37.613395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613420] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:22:27.043 [2024-11-28 06:47:37.613426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:22:27.043 [2024-11-28 06:47:37.613432] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:27.043 [2024-11-28 06:47:37.613439] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:27.043 [2024-11-28 06:47:37.613454] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:27.043 [2024-11-28 06:47:37.613461] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:27.043 [2024-11-28 06:47:37.613466] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:27.043 [2024-11-28 06:47:37.613473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.043 [2024-11-28 06:47:37.613480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:27.043 [2024-11-28 06:47:37.613488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:22:27.043 [2024-11-28 06:47:37.613494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.043 [2024-11-28 06:47:37.618521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.043 [2024-11-28 06:47:37.618628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:27.043 [2024-11-28 06:47:37.618640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.995 ms 00:22:27.043 [2024-11-28 06:47:37.618646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.043 [2024-11-28 06:47:37.618675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.043 [2024-11-28 06:47:37.618681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:27.043 [2024-11-28 06:47:37.618688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:27.043 [2024-11-28 06:47:37.618700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.043 [2024-11-28 06:47:37.626266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.043 [2024-11-28 06:47:37.626290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:27.043 [2024-11-28 06:47:37.626298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.528 ms 00:22:27.043 [2024-11-28 06:47:37.626304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.626327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.626333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:27.044 [2024-11-28 06:47:37.626341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:27.044 [2024-11-28 06:47:37.626347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.626641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.626654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:27.044 [2024-11-28 
06:47:37.626661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:22:27.044 [2024-11-28 06:47:37.626672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.626714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.626723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:27.044 [2024-11-28 06:47:37.626729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:22:27.044 [2024-11-28 06:47:37.626737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.631484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.631507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:27.044 [2024-11-28 06:47:37.631514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.731 ms 00:22:27.044 [2024-11-28 06:47:37.631520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.633582] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:27.044 [2024-11-28 06:47:37.633712] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:27.044 [2024-11-28 06:47:37.633786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.633803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:27.044 [2024-11-28 06:47:37.633818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.199 ms 00:22:27.044 [2024-11-28 06:47:37.633833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.637135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.637167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:27.044 [2024-11-28 06:47:37.637176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.266 ms 00:22:27.044 [2024-11-28 06:47:37.637183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.638188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.638215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:27.044 [2024-11-28 06:47:37.638223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.978 ms 00:22:27.044 [2024-11-28 06:47:37.638229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.639170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.639198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:27.044 [2024-11-28 06:47:37.639205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:22:27.044 [2024-11-28 06:47:37.639210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.639368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.639382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:27.044 [2024-11-28 06:47:37.639389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.109 ms 
00:22:27.044 [2024-11-28 06:47:37.639398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.654163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.654197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:27.044 [2024-11-28 06:47:37.654208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.748 ms 00:22:27.044 [2024-11-28 06:47:37.654213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.659906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:27.044 [2024-11-28 06:47:37.660653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.660677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:27.044 [2024-11-28 06:47:37.660690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.400 ms 00:22:27.044 [2024-11-28 06:47:37.660698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.660766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.660779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:27.044 [2024-11-28 06:47:37.660786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:27.044 [2024-11-28 06:47:37.660794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.660828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.660838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:27.044 [2024-11-28 06:47:37.660848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:22:27.044 [2024-11-28 06:47:37.660855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.661879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.661902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:22:27.044 [2024-11-28 06:47:37.661909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 00:22:27.044 [2024-11-28 06:47:37.661914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.661938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.661948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:27.044 [2024-11-28 06:47:37.661954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:27.044 [2024-11-28 06:47:37.661964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.661992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:27.044 [2024-11-28 06:47:37.662000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.662005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:27.044 [2024-11-28 06:47:37.662010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:27.044 [2024-11-28 06:47:37.662016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.664599] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.664626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:27.044 [2024-11-28 06:47:37.664634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.568 ms 00:22:27.044 [2024-11-28 06:47:37.664647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.664721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.044 [2024-11-28 06:47:37.664731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:27.044 [2024-11-28 06:47:37.664737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:22:27.044 [2024-11-28 06:47:37.664743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.044 [2024-11-28 06:47:37.665474] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 61.505 ms, result 0 00:22:27.044 [2024-11-28 06:47:37.681120] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.044 [2024-11-28 06:47:37.697122] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:22:27.044 [2024-11-28 06:47:37.705222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:27.302 06:47:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.302 06:47:37 -- common/autotest_common.sh@862 -- # return 0 00:22:27.302 06:47:37 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:27.302 06:47:37 -- ftl/common.sh@95 -- # return 0 00:22:27.302 06:47:37 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:27.302 [2024-11-28 06:47:38.062081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.302 [2024-11-28 06:47:38.062123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:27.302 [2024-11-28 06:47:38.062136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:27.302 [2024-11-28 06:47:38.062142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.302 [2024-11-28 06:47:38.062160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.302 [2024-11-28 06:47:38.062166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:27.302 [2024-11-28 06:47:38.062175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:27.302 [2024-11-28 06:47:38.062181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.302 [2024-11-28 06:47:38.062198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:27.302 [2024-11-28 06:47:38.062204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:27.302 [2024-11-28 06:47:38.062212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:27.302 [2024-11-28 06:47:38.062217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:27.302 [2024-11-28 06:47:38.062265] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:22:27.302 true 00:22:27.560 06:47:38 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:22:27.560 { 00:22:27.560 "name": "ftl", 00:22:27.560 "properties": [ 00:22:27.560 { 00:22:27.560 "name": "superblock_version", 00:22:27.560 "value": 5, 00:22:27.560 "read-only": true 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "name": "base_device", 00:22:27.560 "bands": [ 00:22:27.560 { 00:22:27.560 "id": 0, 00:22:27.560 "state": "CLOSED", 00:22:27.560 "validity": 1.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 1, 00:22:27.560 "state": "CLOSED", 00:22:27.560 "validity": 1.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 2, 00:22:27.560 "state": "CLOSED", 00:22:27.560 "validity": 0.007843137254901933 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 3, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 4, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 5, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 6, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 7, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 8, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 9, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 10, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 11, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 12, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 13, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 14, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 15, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 16, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 17, 00:22:27.560 "state": "FREE", 00:22:27.560 "validity": 0.0 00:22:27.560 } 00:22:27.560 ], 00:22:27.560 "read-only": true 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "name": "cache_device", 00:22:27.560 "type": "bdev", 00:22:27.560 "chunks": [ 00:22:27.560 { 00:22:27.560 "id": 0, 00:22:27.560 "state": "OPEN", 00:22:27.560 "utilization": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 1, 00:22:27.560 "state": "OPEN", 00:22:27.560 "utilization": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 2, 00:22:27.560 "state": "FREE", 00:22:27.560 "utilization": 0.0 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "id": 3, 00:22:27.560 "state": "FREE", 00:22:27.560 "utilization": 0.0 00:22:27.560 } 00:22:27.560 ], 00:22:27.560 "read-only": true 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "name": "verbose_mode", 00:22:27.560 "value": true, 00:22:27.560 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:27.560 }, 00:22:27.560 { 00:22:27.560 "name": "prep_upgrade_on_shutdown", 00:22:27.560 "value": false, 00:22:27.560 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:27.560 } 00:22:27.560 ] 00:22:27.560 } 00:22:27.560 06:47:38 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | 
select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:27.560 06:47:38 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:22:27.560 06:47:38 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:27.817 06:47:38 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:27.817 06:47:38 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:27.817 06:47:38 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:27.817 06:47:38 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:27.818 06:47:38 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:28.076 Validate MD5 checksum, iteration 1 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:28.076 06:47:38 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:28.076 06:47:38 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:28.076 06:47:38 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:28.076 06:47:38 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:28.076 06:47:38 -- ftl/common.sh@154 -- # return 0 00:22:28.076 06:47:38 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:28.076 [2024-11-28 06:47:38.717434] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
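[annotation] The tcp_dd call above is a thin wrapper: tcp_initiator_setup only verifies that the initiator config exists, and the copy itself is done by spdk_dd, which attaches to the NVMe/TCP target (listening on 127.0.0.1 port 4420 per the earlier notice) and reads the ftln1 bdev into a plain file. A minimal sketch of the underlying invocation, using only the flags visible in the trace, with $SPDK standing in for /home/vagrant/spdk_repo/spdk:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$SPDK/test/ftl/config/ini.json" \
      --ib=ftln1 --of="$SPDK/test/ftl/file" \
      --bs=1048576 --count=1024 --qd=2 --skip=0

--bs=1048576 with --count=1024 copies 1 GiB per iteration at queue depth 2; --skip appears to count input blocks, dd-style, which is why the second iteration below passes --skip=1024 to read the next gigabyte.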
00:22:28.076 [2024-11-28 06:47:38.717538] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87510 ] 00:22:28.334 [2024-11-28 06:47:38.852775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.334 [2024-11-28 06:47:38.882358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.710  [2024-11-28T06:47:40.743Z] Copying: 728/1024 [MB] (728 MBps) [2024-11-28T06:47:41.310Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:22:30.540 00:22:30.540 06:47:41 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:30.540 06:47:41 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@103 -- # sum=acd28112a895f2b8aab58130863ef640 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@105 -- # [[ acd28112a895f2b8aab58130863ef640 != \a\c\d\2\8\1\1\2\a\8\9\5\f\2\b\8\a\a\b\5\8\1\3\0\8\6\3\e\f\6\4\0 ]] 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:33.073 Validate MD5 checksum, iteration 2 00:22:33.073 06:47:43 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:33.073 06:47:43 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:33.073 06:47:43 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:33.073 06:47:43 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:33.073 06:47:43 -- ftl/common.sh@154 -- # return 0 00:22:33.073 06:47:43 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:33.073 [2024-11-28 06:47:43.403025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
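[annotation] The backslash-heavy comparison above ([[ acd28112... != \a\c\d\2... ]]) is not corruption: the right-hand side of != inside [[ ]] is a glob pattern, so set -x prints the unquoted reference checksum with every character escaped to show it is being matched literally. The whole validation step reduces to a loop of roughly this shape, where ref_sums is an illustrative name for the checksums recorded earlier in the test, before the shutdown path is exercised:

  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # tcp_dd is the harness helper seen above (wraps spdk_dd)
      tcp_dd --ib=ftln1 --of="$tmpfile" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
      sum=$(md5sum "$tmpfile" | cut -f1 '-d ')
      [[ $sum == "${ref_sums[i]}" ]] || return 1   # data must read back unchanged
  done

Iteration 1 checks out (acd28112a895f2b8aab58130863ef640 on both sides), so the loop advances to the second gigabyte with --skip=1024.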
00:22:33.073 [2024-11-28 06:47:43.403865] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87567 ] 00:22:33.073 [2024-11-28 06:47:43.538890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.073 [2024-11-28 06:47:43.566054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.451  [2024-11-28T06:47:45.479Z] Copying: 748/1024 [MB] (748 MBps) [2024-11-28T06:47:46.045Z] Copying: 1024/1024 [MB] (average 735 MBps) 00:22:35.275 00:22:35.275 06:47:45 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:35.275 06:47:45 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@103 -- # sum=3ee61297e1defd23e699ec8aa787e3ad 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ee61297e1defd23e699ec8aa787e3ad != \3\e\e\6\1\2\9\7\e\1\d\e\f\d\2\3\e\6\9\9\e\c\8\a\a\7\8\7\e\3\a\d ]] 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:22:37.180 06:47:47 -- ftl/common.sh@137 -- # [[ -n 87480 ]] 00:22:37.180 06:47:47 -- ftl/common.sh@138 -- # kill -9 87480 00:22:37.180 06:47:47 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:22:37.180 06:47:47 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:22:37.180 06:47:47 -- ftl/common.sh@81 -- # local base_bdev= 00:22:37.180 06:47:47 -- ftl/common.sh@82 -- # local cache_bdev= 00:22:37.180 06:47:47 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:37.180 06:47:47 -- ftl/common.sh@89 -- # spdk_tgt_pid=87617 00:22:37.180 06:47:47 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:37.180 06:47:47 -- ftl/common.sh@91 -- # waitforlisten 87617 00:22:37.180 06:47:47 -- common/autotest_common.sh@829 -- # '[' -z 87617 ']' 00:22:37.180 06:47:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.180 06:47:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.180 06:47:47 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:37.180 06:47:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.180 06:47:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.180 06:47:47 -- common/autotest_common.sh@10 -- # set +x 00:22:37.180 [2024-11-28 06:47:47.829096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
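[annotation] This is the heart of the dirty-shutdown scenario: tcp_target_shutdown_dirty sends SIGKILL to the old target (pid 87480), so FTL never runs its shutdown path, and tcp_target_setup then starts a fresh spdk_tgt (pid 87617) from the same tgt.json. In outline, following only the commands visible in the trace (waitforlisten is the harness helper that polls the new process's RPC socket):

  [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"   # dirty: no clean FTL shutdown
  unset spdk_tgt_pid

  "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' \
      --config="$SPDK/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # waits on /var/tmp/spdk.sock

The payoff is in the startup trace that follows: "SHM: clean 0, shm_clean 0" plus the Recover band state / Restore P2L checkpoints / Recover chunk state actions show FTL rebuilding its metadata from checkpoints and open NV-cache chunks rather than loading a cleanly persisted state.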
00:22:37.180 [2024-11-28 06:47:47.829357] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87617 ] 00:22:37.180 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 87480 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:22:37.439 [2024-11-28 06:47:47.961815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.439 [2024-11-28 06:47:47.990118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:37.439 [2024-11-28 06:47:47.990281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.700 [2024-11-28 06:47:48.213316] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:37.700 [2024-11-28 06:47:48.213369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:37.700 [2024-11-28 06:47:48.349800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.349947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:37.700 [2024-11-28 06:47:48.349963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:37.700 [2024-11-28 06:47:48.349974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.350017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.350024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:37.700 [2024-11-28 06:47:48.350032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:22:37.700 [2024-11-28 06:47:48.350038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.350058] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:37.700 [2024-11-28 06:47:48.350226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:37.700 [2024-11-28 06:47:48.350239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.350244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:37.700 [2024-11-28 06:47:48.350254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:22:37.700 [2024-11-28 06:47:48.350259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.350439] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:37.700 [2024-11-28 06:47:48.353380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.353492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:37.700 [2024-11-28 06:47:48.353512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.941 ms 00:22:37.700 [2024-11-28 06:47:48.353518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.354271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.354293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:37.700 [2024-11-28 06:47:48.354300] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:37.700 [2024-11-28 06:47:48.354306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.354519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.354528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:37.700 [2024-11-28 06:47:48.354534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.164 ms 00:22:37.700 [2024-11-28 06:47:48.354540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.354568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.354578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:37.700 [2024-11-28 06:47:48.354585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:37.700 [2024-11-28 06:47:48.354590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.354608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.354614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:37.700 [2024-11-28 06:47:48.354622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:37.700 [2024-11-28 06:47:48.354628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.354645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:37.700 [2024-11-28 06:47:48.355328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.355340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:37.700 [2024-11-28 06:47:48.355346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.688 ms 00:22:37.700 [2024-11-28 06:47:48.355352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.355375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.355381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:37.700 [2024-11-28 06:47:48.355391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:37.700 [2024-11-28 06:47:48.355399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.355415] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:37.700 [2024-11-28 06:47:48.355429] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:22:37.700 [2024-11-28 06:47:48.355452] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:37.700 [2024-11-28 06:47:48.355466] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:22:37.700 [2024-11-28 06:47:48.355522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:22:37.700 [2024-11-28 06:47:48.355533] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:37.700 [2024-11-28 06:47:48.355546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:22:37.700 [2024-11-28 06:47:48.355553] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:37.700 [2024-11-28 06:47:48.355560] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:37.700 [2024-11-28 06:47:48.355566] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:37.700 [2024-11-28 06:47:48.355572] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:37.700 [2024-11-28 06:47:48.355577] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:22:37.700 [2024-11-28 06:47:48.355583] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:22:37.700 [2024-11-28 06:47:48.355589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.355594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:37.700 [2024-11-28 06:47:48.355603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:22:37.700 [2024-11-28 06:47:48.355610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.355656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.700 [2024-11-28 06:47:48.355663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:37.700 [2024-11-28 06:47:48.355669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:22:37.700 [2024-11-28 06:47:48.355674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.700 [2024-11-28 06:47:48.355746] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:37.700 [2024-11-28 06:47:48.355754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:37.700 [2024-11-28 06:47:48.355759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:37.700 [2024-11-28 06:47:48.355766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:37.700 [2024-11-28 06:47:48.355780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:37.700 [2024-11-28 06:47:48.355791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:37.700 [2024-11-28 06:47:48.355796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:37.700 [2024-11-28 06:47:48.355803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:37.700 [2024-11-28 06:47:48.355813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:37.700 [2024-11-28 06:47:48.355817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:37.700 [2024-11-28 06:47:48.355827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:22:37.700 [2024-11-28 06:47:48.355841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:22:37.700 [2024-11-28 06:47:48.355846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.700 [2024-11-28 06:47:48.355851] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:22:37.700 [2024-11-28 06:47:48.355856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:22:37.700 [2024-11-28 06:47:48.355862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:22:37.700 [2024-11-28 06:47:48.355867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:37.700 [2024-11-28 06:47:48.355871] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:37.700 [2024-11-28 06:47:48.355876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:37.700 [2024-11-28 06:47:48.355883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:37.701 [2024-11-28 06:47:48.355889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:22:37.701 [2024-11-28 06:47:48.355893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:37.701 [2024-11-28 06:47:48.355898] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:37.701 [2024-11-28 06:47:48.355903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:37.701 [2024-11-28 06:47:48.355907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:37.701 [2024-11-28 06:47:48.355912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:37.701 [2024-11-28 06:47:48.355916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:22:37.701 [2024-11-28 06:47:48.355922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:22:37.701 [2024-11-28 06:47:48.355927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:37.701 [2024-11-28 06:47:48.355932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:37.701 [2024-11-28 06:47:48.355936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.701 [2024-11-28 06:47:48.355941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:37.701 [2024-11-28 06:47:48.355945] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:22:37.701 [2024-11-28 06:47:48.355951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.701 [2024-11-28 06:47:48.355956] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:37.701 [2024-11-28 06:47:48.355963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:37.701 [2024-11-28 06:47:48.355968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:37.701 [2024-11-28 06:47:48.355973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:37.701 [2024-11-28 06:47:48.355983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:37.701 [2024-11-28 06:47:48.355989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:37.701 [2024-11-28 06:47:48.355994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:37.701 [2024-11-28 06:47:48.356001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:37.701 [2024-11-28 06:47:48.356007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:22:37.701 [2024-11-28 06:47:48.356012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:37.701 [2024-11-28 06:47:48.356019] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:37.701 [2024-11-28 06:47:48.356026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:37.701 [2024-11-28 06:47:48.356039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:22:37.701 [2024-11-28 06:47:48.356057] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:22:37.701 [2024-11-28 06:47:48.356066] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:22:37.701 [2024-11-28 06:47:48.356073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:22:37.701 [2024-11-28 06:47:48.356079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356097] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:22:37.701 [2024-11-28 06:47:48.356109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:22:37.701 [2024-11-28 06:47:48.356115] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:37.701 [2024-11-28 06:47:48.356122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356131] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.701 [2024-11-28 06:47:48.356137] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:37.701 [2024-11-28 06:47:48.356143] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:37.701 
[2024-11-28 06:47:48.356149] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:37.701 [2024-11-28 06:47:48.356156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.356162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:37.701 [2024-11-28 06:47:48.356172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.456 ms 00:22:37.701 [2024-11-28 06:47:48.356178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.360148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.360255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:37.701 [2024-11-28 06:47:48.360267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.939 ms 00:22:37.701 [2024-11-28 06:47:48.360273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.360300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.360314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:37.701 [2024-11-28 06:47:48.360320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:37.701 [2024-11-28 06:47:48.360328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.367926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.367953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:37.701 [2024-11-28 06:47:48.367960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.564 ms 00:22:37.701 [2024-11-28 06:47:48.367966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.367985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.367991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:37.701 [2024-11-28 06:47:48.368000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:37.701 [2024-11-28 06:47:48.368006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.368065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.368077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:37.701 [2024-11-28 06:47:48.368083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:22:37.701 [2024-11-28 06:47:48.368089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.368117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.368124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:37.701 [2024-11-28 06:47:48.368130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:37.701 [2024-11-28 06:47:48.368138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.372882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.372906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:37.701 [2024-11-28 
06:47:48.372914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.728 ms 00:22:37.701 [2024-11-28 06:47:48.372919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.372985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.372997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:22:37.701 [2024-11-28 06:47:48.373004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:37.701 [2024-11-28 06:47:48.373012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.376134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.376161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:22:37.701 [2024-11-28 06:47:48.376169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.107 ms 00:22:37.701 [2024-11-28 06:47:48.376175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.377032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.377056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:37.701 [2024-11-28 06:47:48.377068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:22:37.701 [2024-11-28 06:47:48.377076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.392336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.392367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:37.701 [2024-11-28 06:47:48.392376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.241 ms 00:22:37.701 [2024-11-28 06:47:48.392382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.701 [2024-11-28 06:47:48.392446] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:22:37.701 [2024-11-28 06:47:48.392483] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:22:37.701 [2024-11-28 06:47:48.392513] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:22:37.701 [2024-11-28 06:47:48.392541] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:22:37.701 [2024-11-28 06:47:48.392547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.701 [2024-11-28 06:47:48.392553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:22:37.701 [2024-11-28 06:47:48.392560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:22:37.701 [2024-11-28 06:47:48.392568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.702 [2024-11-28 06:47:48.392604] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:22:37.702 [2024-11-28 06:47:48.392614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.702 [2024-11-28 06:47:48.392620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:22:37.702 [2024-11-28 06:47:48.392626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:37.702 [2024-11-28 
06:47:48.392632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.702 [2024-11-28 06:47:48.394962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.702 [2024-11-28 06:47:48.394989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:22:37.702 [2024-11-28 06:47:48.394996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.314 ms 00:22:37.702 [2024-11-28 06:47:48.395005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.702 [2024-11-28 06:47:48.395500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.702 [2024-11-28 06:47:48.395551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:22:37.702 [2024-11-28 06:47:48.395558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:37.702 [2024-11-28 06:47:48.395564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.702 [2024-11-28 06:47:48.395584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.702 [2024-11-28 06:47:48.395592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:22:37.702 [2024-11-28 06:47:48.395598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:37.702 [2024-11-28 06:47:48.395603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.702 [2024-11-28 06:47:48.395751] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:22:38.269 [2024-11-28 06:47:48.900296] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:22:38.269 [2024-11-28 06:47:48.900473] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:22:38.835 [2024-11-28 06:47:49.482188] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:22:38.835 [2024-11-28 06:47:49.482289] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:38.835 [2024-11-28 06:47:49.482304] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:38.835 [2024-11-28 06:47:49.482315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.482323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:22:38.835 [2024-11-28 06:47:49.482336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1086.694 ms 00:22:38.835 [2024-11-28 06:47:49.482343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.482375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.482388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:22:38.835 [2024-11-28 06:47:49.482396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:38.835 [2024-11-28 06:47:49.482411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.490158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:38.835 [2024-11-28 06:47:49.490263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.490279] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:38.835 [2024-11-28 06:47:49.490289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.834 ms 00:22:38.835 [2024-11-28 06:47:49.490296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.490959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.490981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:22:38.835 [2024-11-28 06:47:49.490990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:22:38.835 [2024-11-28 06:47:49.490998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.493254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.493391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:22:38.835 [2024-11-28 06:47:49.493406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.239 ms 00:22:38.835 [2024-11-28 06:47:49.493418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.497432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.835 [2024-11-28 06:47:49.497538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:22:38.835 [2024-11-28 06:47:49.497593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.986 ms 00:22:38.835 [2024-11-28 06:47:49.497617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.835 [2024-11-28 06:47:49.497721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.836 [2024-11-28 06:47:49.497751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:38.836 [2024-11-28 06:47:49.497772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:38.836 [2024-11-28 06:47:49.497791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.836 [2024-11-28 06:47:49.499060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.836 [2024-11-28 06:47:49.499161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:22:38.836 [2024-11-28 06:47:49.499210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.193 ms 00:22:38.836 [2024-11-28 06:47:49.499232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.836 [2024-11-28 06:47:49.499275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.836 [2024-11-28 06:47:49.499296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:38.836 [2024-11-28 06:47:49.499316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:38.836 [2024-11-28 06:47:49.499334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.836 [2024-11-28 06:47:49.499396] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:38.836 [2024-11-28 06:47:49.499482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.836 [2024-11-28 06:47:49.499512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:38.836 [2024-11-28 06:47:49.499532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:22:38.836 [2024-11-28 06:47:49.499551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:38.836 [2024-11-28 06:47:49.499613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:38.836 [2024-11-28 06:47:49.499636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:38.836 [2024-11-28 06:47:49.499657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:38.836 [2024-11-28 06:47:49.499675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:38.836 [2024-11-28 06:47:49.500520] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1150.312 ms, result 0 00:22:38.836 [2024-11-28 06:47:49.515490] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.836 [2024-11-28 06:47:49.531498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:22:38.836 [2024-11-28 06:47:49.539617] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:39.434 Validate MD5 checksum, iteration 1 00:22:39.434 06:47:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.434 06:47:49 -- common/autotest_common.sh@862 -- # return 0 00:22:39.434 06:47:49 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:39.434 06:47:49 -- ftl/common.sh@95 -- # return 0 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:39.434 06:47:49 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:39.434 06:47:49 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:39.434 06:47:49 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:39.434 06:47:49 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:39.434 06:47:49 -- ftl/common.sh@154 -- # return 0 00:22:39.434 06:47:49 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:39.434 [2024-11-28 06:47:50.000929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:39.434 [2024-11-28 06:47:50.001193] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87647 ] 00:22:39.434 [2024-11-28 06:47:50.137027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.434 [2024-11-28 06:47:50.167536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.809  [2024-11-28T06:47:52.145Z] Copying: 742/1024 [MB] (742 MBps) [2024-11-28T06:47:52.712Z] Copying: 1024/1024 [MB] (average 740 MBps) 00:22:41.942 00:22:41.942 06:47:52 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:41.942 06:47:52 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@103 -- # sum=acd28112a895f2b8aab58130863ef640 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@105 -- # [[ acd28112a895f2b8aab58130863ef640 != \a\c\d\2\8\1\1\2\a\8\9\5\f\2\b\8\a\a\b\5\8\1\3\0\8\6\3\e\f\6\4\0 ]] 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:43.842 Validate MD5 checksum, iteration 2 00:22:43.842 06:47:54 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:43.842 06:47:54 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:43.842 06:47:54 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:43.842 06:47:54 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:43.842 06:47:54 -- ftl/common.sh@154 -- # return 0 00:22:43.842 06:47:54 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:43.842 [2024-11-28 06:47:54.612322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:43.842 [2024-11-28 06:47:54.612407] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87704 ] 00:22:44.099 [2024-11-28 06:47:54.744836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.099 [2024-11-28 06:47:54.773839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.475  [2024-11-28T06:47:56.814Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-28T06:47:59.351Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:22:48.581 00:22:48.581 06:47:58 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:48.581 06:47:58 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@103 -- # sum=3ee61297e1defd23e699ec8aa787e3ad 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ee61297e1defd23e699ec8aa787e3ad != \3\e\e\6\1\2\9\7\e\1\d\e\f\d\2\3\e\6\9\9\e\c\8\a\a\7\8\7\e\3\a\d ]] 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:50.483 06:48:00 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:22:50.484 06:48:00 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:22:50.484 06:48:00 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:22:50.484 06:48:00 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:22:50.484 06:48:00 -- ftl/common.sh@130 -- # [[ -n 87617 ]] 00:22:50.484 06:48:00 -- ftl/common.sh@131 -- # killprocess 87617 00:22:50.484 06:48:00 -- common/autotest_common.sh@936 -- # '[' -z 87617 ']' 00:22:50.484 06:48:00 -- common/autotest_common.sh@940 -- # kill -0 87617 00:22:50.484 06:48:00 -- common/autotest_common.sh@941 -- # uname 00:22:50.484 06:48:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.484 06:48:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87617 00:22:50.484 killing process with pid 87617 00:22:50.484 06:48:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.484 06:48:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:50.484 06:48:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87617' 00:22:50.484 06:48:01 -- common/autotest_common.sh@955 -- # kill 87617 00:22:50.484 06:48:01 -- common/autotest_common.sh@960 -- # wait 87617 00:22:50.484 [2024-11-28 06:48:01.100934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:22:50.484 [2024-11-28 06:48:01.104044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.104078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:50.484 [2024-11-28 06:48:01.104088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:50.484 [2024-11-28 06:48:01.104094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 
[2024-11-28 06:48:01.104114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:50.484 [2024-11-28 06:48:01.104512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.104530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:50.484 [2024-11-28 06:48:01.104538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:22:50.484 [2024-11-28 06:48:01.104544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.104851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.104893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:50.484 [2024-11-28 06:48:01.105037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:22:50.484 [2024-11-28 06:48:01.105055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.106223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.106313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:50.484 [2024-11-28 06:48:01.106367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.144 ms 00:22:50.484 [2024-11-28 06:48:01.106385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.107320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.107385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:22:50.484 [2024-11-28 06:48:01.107424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.899 ms 00:22:50.484 [2024-11-28 06:48:01.107442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.108729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.108823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:50.484 [2024-11-28 06:48:01.108860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.238 ms 00:22:50.484 [2024-11-28 06:48:01.108877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.109995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.110086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:50.484 [2024-11-28 06:48:01.110126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:22:50.484 [2024-11-28 06:48:01.110143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.110214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.110329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:50.484 [2024-11-28 06:48:01.110338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:22:50.484 [2024-11-28 06:48:01.110344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.111429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.111452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:22:50.484 [2024-11-28 06:48:01.111459] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:22:50.484 [2024-11-28 06:48:01.111464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.112680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.112745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:22:50.484 [2024-11-28 06:48:01.112754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:22:50.484 [2024-11-28 06:48:01.112759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.113954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.113984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:50.484 [2024-11-28 06:48:01.113991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.167 ms 00:22:50.484 [2024-11-28 06:48:01.113997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.114940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.484 [2024-11-28 06:48:01.114967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:50.484 [2024-11-28 06:48:01.114974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:22:50.484 [2024-11-28 06:48:01.114980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.484 [2024-11-28 06:48:01.115006] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:50.484 [2024-11-28 06:48:01.115017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:50.484 [2024-11-28 06:48:01.115026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:50.484 [2024-11-28 06:48:01.115033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:50.484 [2024-11-28 06:48:01.115040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115100] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:50.484 [2024-11-28 06:48:01.115132] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:50.484 [2024-11-28 06:48:01.115138] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ae285c28-66e5-4575-8dfb-6dd67c3fe0fb 00:22:50.484 [2024-11-28 06:48:01.115144] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:50.484 [2024-11-28 06:48:01.115155] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:22:50.485 [2024-11-28 06:48:01.115163] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:22:50.485 [2024-11-28 06:48:01.115169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:22:50.485 [2024-11-28 06:48:01.115177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:50.485 [2024-11-28 06:48:01.115183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:50.485 [2024-11-28 06:48:01.115189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:50.485 [2024-11-28 06:48:01.115194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:50.485 [2024-11-28 06:48:01.115198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:50.485 [2024-11-28 06:48:01.115206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.485 [2024-11-28 06:48:01.115212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:50.485 [2024-11-28 06:48:01.115219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:22:50.485 [2024-11-28 06:48:01.115225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.116499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.485 [2024-11-28 06:48:01.116525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:50.485 [2024-11-28 06:48:01.116538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.261 ms 00:22:50.485 [2024-11-28 06:48:01.116545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.116594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.485 [2024-11-28 06:48:01.116601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:50.485 [2024-11-28 06:48:01.116607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:50.485 [2024-11-28 06:48:01.116616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.121232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.121263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:50.485 [2024-11-28 06:48:01.121270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.121276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.121301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.121308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:50.485 [2024-11-28 06:48:01.121315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.121320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.121371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.121380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:50.485 [2024-11-28 06:48:01.121389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.121396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.121414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.121424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:50.485 [2024-11-28 06:48:01.121431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.121439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.129591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.129624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:50.485 [2024-11-28 06:48:01.129636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.129642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.132812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.132840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:50.485 [2024-11-28 06:48:01.132855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.132861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.132893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.132900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:50.485 [2024-11-28 06:48:01.132908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.132917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.132951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.132958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:50.485 [2024-11-28 06:48:01.132964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.132970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.133024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.133032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:50.485 [2024-11-28 06:48:01.133038] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.133045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.133071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.133077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:50.485 [2024-11-28 06:48:01.133083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.133093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.133124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.133132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:50.485 [2024-11-28 06:48:01.133138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.133144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.133178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:50.485 [2024-11-28 06:48:01.133187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:50.485 [2024-11-28 06:48:01.133193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:50.485 [2024-11-28 06:48:01.133199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.485 [2024-11-28 06:48:01.133293] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 29.227 ms, result 0 00:22:50.744 06:48:01 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:50.744 06:48:01 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:50.744 06:48:01 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:22:50.744 06:48:01 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:22:50.744 06:48:01 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:22:50.744 06:48:01 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:50.744 Remove shared memory files 00:22:50.744 06:48:01 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:22:50.744 06:48:01 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:50.744 06:48:01 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:50.744 06:48:01 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:50.744 06:48:01 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid87480 00:22:50.744 06:48:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:50.744 06:48:01 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:50.744 ************************************ 00:22:50.744 END TEST ftl_upgrade_shutdown 00:22:50.744 ************************************ 00:22:50.744 00:22:50.744 real 1m6.197s 00:22:50.744 user 1m30.792s 00:22:50.744 sys 0m16.687s 00:22:50.744 06:48:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:50.744 06:48:01 -- common/autotest_common.sh@10 -- # set +x 00:22:50.744 Process with pid 81623 is not found 00:22:50.744 06:48:01 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:22:50.744 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:22:50.744 06:48:01 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:22:50.744 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:22:50.744 06:48:01 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:22:50.744 06:48:01 -- ftl/ftl.sh@14 -- 
# killprocess 81623 00:22:50.744 06:48:01 -- common/autotest_common.sh@936 -- # '[' -z 81623 ']' 00:22:50.744 06:48:01 -- common/autotest_common.sh@940 -- # kill -0 81623 00:22:50.744 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81623) - No such process 00:22:50.744 06:48:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81623 is not found' 00:22:50.744 06:48:01 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:22:50.744 06:48:01 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87812 00:22:50.744 06:48:01 -- ftl/ftl.sh@20 -- # waitforlisten 87812 00:22:50.744 06:48:01 -- common/autotest_common.sh@829 -- # '[' -z 87812 ']' 00:22:50.744 06:48:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.744 06:48:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.744 06:48:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.744 06:48:01 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.744 06:48:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.744 06:48:01 -- common/autotest_common.sh@10 -- # set +x 00:22:50.744 [2024-11-28 06:48:01.496913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:50.744 [2024-11-28 06:48:01.497003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87812 ] 00:22:51.002 [2024-11-28 06:48:01.627101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.002 [2024-11-28 06:48:01.654875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:51.002 [2024-11-28 06:48:01.655170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.568 06:48:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.568 06:48:02 -- common/autotest_common.sh@862 -- # return 0 00:22:51.568 06:48:02 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:51.826 nvme0n1 00:22:51.826 06:48:02 -- ftl/ftl.sh@22 -- # clear_lvols 00:22:51.826 06:48:02 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:51.826 06:48:02 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:52.084 06:48:02 -- ftl/common.sh@28 -- # stores=24fc45f9-de1c-483e-ad47-3053d89879aa 00:22:52.084 06:48:02 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:52.084 06:48:02 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24fc45f9-de1c-483e-ad47-3053d89879aa 00:22:52.380 06:48:02 -- ftl/ftl.sh@23 -- # killprocess 87812 00:22:52.380 06:48:02 -- common/autotest_common.sh@936 -- # '[' -z 87812 ']' 00:22:52.380 06:48:02 -- common/autotest_common.sh@940 -- # kill -0 87812 00:22:52.380 06:48:02 -- common/autotest_common.sh@941 -- # uname 00:22:52.380 06:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.380 06:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87812 00:22:52.380 killing process with pid 87812 00:22:52.380 06:48:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:52.380 06:48:02 -- 
common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:52.380 06:48:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87812' 00:22:52.380 06:48:02 -- common/autotest_common.sh@955 -- # kill 87812 00:22:52.380 06:48:02 -- common/autotest_common.sh@960 -- # wait 87812 00:22:52.662 06:48:03 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:52.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.662 Waiting for block devices as requested 00:22:52.920 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:22:52.920 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:22:52.920 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:22:52.920 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:22:58.193 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:22:58.193 Remove shared memory files 00:22:58.193 06:48:08 -- ftl/ftl.sh@28 -- # remove_shm 00:22:58.193 06:48:08 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.193 06:48:08 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.193 06:48:08 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.193 06:48:08 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:58.193 06:48:08 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.193 06:48:08 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.193 ************************************ 00:22:58.193 END TEST ftl 00:22:58.193 ************************************ 00:22:58.193 00:22:58.193 real 8m32.631s 00:22:58.193 user 10m26.705s 00:22:58.193 sys 0m58.985s 00:22:58.193 06:48:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:58.193 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 06:48:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:58.193 06:48:08 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:22:58.193 06:48:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:58.193 06:48:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:58.193 06:48:08 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:22:58.193 06:48:08 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:22:58.193 06:48:08 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:22:58.193 06:48:08 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:22:58.193 06:48:08 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:22:58.193 06:48:08 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:22:58.193 06:48:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.193 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 06:48:08 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:22:58.193 06:48:08 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:22:58.193 06:48:08 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:22:58.193 06:48:08 -- common/autotest_common.sh@10 -- # set +x 00:22:59.573 INFO: APP EXITING 00:22:59.573 INFO: killing all VMs 00:22:59.573 INFO: killing vhost app 00:22:59.573 INFO: EXIT DONE 00:23:00.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.144 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:23:00.144 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:23:00.144 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:23:00.144 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:23:01.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
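[Editor's note] The xtrace above walks through autotest_common.sh's killprocess() helper one step at a time (the '[' -z ']' guard, kill -0 probe, ps comm lookup, kill, wait). For readability, here is a condensed sketch of that flow reconstructed purely from the trace; the real helper has additional branches (the sudo case, non-Linux handling) that are simplified here, so treat this as an approximation rather than the canonical source.

killprocess() {
  local pid=$1 process_name=
  # refuse an empty pid: mirrors the "'[' -z ... ']'" guard in the trace
  [[ -n $pid ]] || return 1
  # kill -0 probes whether the process exists without sending a signal
  kill -0 "$pid" 2>/dev/null || return 1
  # look up the command name (e.g. reactor_0) so a bare sudo wrapper
  # is never signalled directly
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  if [[ $process_name == sudo ]]; then
    # simplification: the real helper signals sudo's child instead
    return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  # wait reaps the process and surfaces its exit status; this works here
  # because the target (spdk_tgt) was started as a child of the suite
  wait "$pid"
}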
00:23:01.087 Cleaning 00:23:01.087 Removing: /var/run/dpdk/spdk0/config 00:23:01.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:01.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:01.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:01.087 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:01.087 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:01.087 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:01.087 Removing: /var/run/dpdk/spdk0 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68067 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68241 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68535 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68602 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68686 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68774 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68859 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68893 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68935 00:23:01.087 Removing: /var/run/dpdk/spdk_pid68999 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69072 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69496 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69538 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69585 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69595 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69653 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69669 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69727 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69743 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69791 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69803 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69845 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69863 00:23:01.087 Removing: /var/run/dpdk/spdk_pid69989 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70026 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70108 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70156 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70182 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70243 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70264 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70299 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70314 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70350 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70370 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70406 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70421 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70456 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70477 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70507 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70527 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70563 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70578 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70613 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70634 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70664 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70690 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70720 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70735 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70776 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70791 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70821 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70847 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70879 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70894 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70935 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70950 00:23:01.087 Removing: /var/run/dpdk/spdk_pid70987 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71007 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71037 00:23:01.087 Removing: 
/var/run/dpdk/spdk_pid71060 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71095 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71113 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71152 00:23:01.087 Removing: /var/run/dpdk/spdk_pid71175 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71214 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71229 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71266 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71287 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71323 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71396 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71497 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71656 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71723 00:23:01.349 Removing: /var/run/dpdk/spdk_pid71749 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72171 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72507 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72610 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72647 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72672 00:23:01.349 Removing: /var/run/dpdk/spdk_pid72744 00:23:01.349 Removing: /var/run/dpdk/spdk_pid73379 00:23:01.349 Removing: /var/run/dpdk/spdk_pid73410 00:23:01.349 Removing: /var/run/dpdk/spdk_pid73862 00:23:01.349 Removing: /var/run/dpdk/spdk_pid73997 00:23:01.349 Removing: /var/run/dpdk/spdk_pid74095 00:23:01.349 Removing: /var/run/dpdk/spdk_pid74137 00:23:01.349 Removing: /var/run/dpdk/spdk_pid74163 00:23:01.349 Removing: /var/run/dpdk/spdk_pid74183 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76087 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76202 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76211 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76224 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76281 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76285 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76297 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76373 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76377 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76389 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76478 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76482 00:23:01.349 Removing: /var/run/dpdk/spdk_pid76499 00:23:01.349 Removing: /var/run/dpdk/spdk_pid77928 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78013 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78131 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78192 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78246 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78301 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78373 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78440 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78577 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78941 00:23:01.349 Removing: /var/run/dpdk/spdk_pid78966 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79397 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79567 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79668 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79756 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79798 00:23:01.349 Removing: /var/run/dpdk/spdk_pid79818 00:23:01.349 Removing: /var/run/dpdk/spdk_pid80235 00:23:01.349 Removing: /var/run/dpdk/spdk_pid80263 00:23:01.349 Removing: /var/run/dpdk/spdk_pid80319 00:23:01.349 Removing: /var/run/dpdk/spdk_pid80680 00:23:01.349 Removing: /var/run/dpdk/spdk_pid80827 00:23:01.349 Removing: /var/run/dpdk/spdk_pid81623 00:23:01.349 Removing: /var/run/dpdk/spdk_pid81738 00:23:01.349 Removing: /var/run/dpdk/spdk_pid81909 00:23:01.349 Removing: /var/run/dpdk/spdk_pid81984 00:23:01.349 Removing: /var/run/dpdk/spdk_pid82255 00:23:01.349 Removing: /var/run/dpdk/spdk_pid82476 
00:23:01.349 Removing: /var/run/dpdk/spdk_pid82821 00:23:01.349 Removing: /var/run/dpdk/spdk_pid82970 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83050 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83091 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83246 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83260 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83296 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83499 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83713 00:23:01.349 Removing: /var/run/dpdk/spdk_pid83984 00:23:01.349 Removing: /var/run/dpdk/spdk_pid84347 00:23:01.349 Removing: /var/run/dpdk/spdk_pid84648 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85063 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85188 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85265 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85623 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85669 00:23:01.349 Removing: /var/run/dpdk/spdk_pid85960 00:23:01.349 Removing: /var/run/dpdk/spdk_pid86251 00:23:01.349 Removing: /var/run/dpdk/spdk_pid86975 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87080 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87117 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87172 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87223 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87271 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87480 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87510 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87567 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87617 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87647 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87704 00:23:01.349 Removing: /var/run/dpdk/spdk_pid87812 00:23:01.349 Clean 00:23:01.610 killing process with pid 60277 00:23:01.610 killing process with pid 60280 00:23:01.610 06:48:12 -- common/autotest_common.sh@1446 -- # return 0 00:23:01.610 06:48:12 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:23:01.610 06:48:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.610 06:48:12 -- common/autotest_common.sh@10 -- # set +x 00:23:01.610 06:48:12 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:23:01.610 06:48:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.610 06:48:12 -- common/autotest_common.sh@10 -- # set +x 00:23:01.610 06:48:12 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:01.610 06:48:12 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:01.610 06:48:12 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:01.610 06:48:12 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:23:01.610 06:48:12 -- spdk/autotest.sh@383 -- # hostname 00:23:01.610 06:48:12 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:01.871 geninfo: WARNING: invalid characters removed from testname! 
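[Editor's note] The lcov capture above and the merge/filter passes logged below form a fixed coverage post-processing pipeline. The following sketch condenses it: OUT, SPDK, and the shortened RC flag list are conveniences for readability, and the logged commands additionally carry genhtml_*/geninfo_* rc flags plus, on the '/usr/*' pass, --ignore-errors unused,unused, so the exact flag set here is abbreviated rather than verbatim.

OUT=/home/vagrant/spdk_repo/output
SPDK=/home/vagrant/spdk_repo/spdk
RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

# 1. capture the counters accumulated during the test run
lcov $RC -q -c --no-external -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"
# 2. merge with the pre-test baseline so files the tests never touched still appear
lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# 3. strip code that is not SPDK's own (DPDK submodule, system headers, helper apps)
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
# 4. drop the intermediate captures, keeping only cov_total.info
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"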
00:23:28.446 06:48:35 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:28.446 06:48:38 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:30.353 06:48:40 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:32.883 06:48:43 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:34.782 06:48:45 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:36.679 06:48:47 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:38.578 06:48:48 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:38.578 06:48:48 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:23:38.578 06:48:48 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:23:38.578 06:48:48 -- common/autotest_common.sh@1690 -- $ lcov --version 00:23:38.578 06:48:49 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:23:38.578 06:48:49 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:23:38.578 06:48:49 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:23:38.578 06:48:49 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:23:38.578 06:48:49 -- scripts/common.sh@335 -- $ IFS=.-: 00:23:38.578 06:48:49 -- scripts/common.sh@335 -- $ read -ra ver1 00:23:38.578 06:48:49 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:38.578 06:48:49 -- scripts/common.sh@336 -- $ read -ra ver2 00:23:38.578 06:48:49 -- scripts/common.sh@337 -- $ local 'op=<' 00:23:38.578 06:48:49 -- scripts/common.sh@339 -- $ ver1_l=2 00:23:38.578 06:48:49 -- scripts/common.sh@340 -- $ ver2_l=1 00:23:38.578 06:48:49 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:23:38.578 06:48:49 -- scripts/common.sh@343 -- $ case "$op" in 00:23:38.578 06:48:49 -- scripts/common.sh@344 -- $ : 1 00:23:38.578 06:48:49 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:23:38.578 06:48:49 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.578 06:48:49 -- scripts/common.sh@364 -- $ decimal 1 00:23:38.578 06:48:49 -- scripts/common.sh@352 -- $ local d=1 00:23:38.578 06:48:49 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:38.578 06:48:49 -- scripts/common.sh@354 -- $ echo 1 00:23:38.578 06:48:49 -- scripts/common.sh@364 -- $ ver1[v]=1 00:23:38.579 06:48:49 -- scripts/common.sh@365 -- $ decimal 2 00:23:38.579 06:48:49 -- scripts/common.sh@352 -- $ local d=2 00:23:38.579 06:48:49 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:38.579 06:48:49 -- scripts/common.sh@354 -- $ echo 2 00:23:38.579 06:48:49 -- scripts/common.sh@365 -- $ ver2[v]=2 00:23:38.579 06:48:49 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:23:38.579 06:48:49 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:23:38.579 06:48:49 -- scripts/common.sh@367 -- $ return 0 00:23:38.579 06:48:49 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.579 06:48:49 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:23:38.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.579 --rc genhtml_branch_coverage=1 00:23:38.579 --rc genhtml_function_coverage=1 00:23:38.579 --rc genhtml_legend=1 00:23:38.579 --rc geninfo_all_blocks=1 00:23:38.579 --rc geninfo_unexecuted_blocks=1 00:23:38.579 00:23:38.579 ' 00:23:38.579 06:48:49 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:23:38.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.579 --rc genhtml_branch_coverage=1 00:23:38.579 --rc genhtml_function_coverage=1 00:23:38.579 --rc genhtml_legend=1 00:23:38.579 --rc geninfo_all_blocks=1 00:23:38.579 --rc geninfo_unexecuted_blocks=1 00:23:38.579 00:23:38.579 ' 00:23:38.579 06:48:49 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:23:38.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.579 --rc genhtml_branch_coverage=1 00:23:38.579 --rc genhtml_function_coverage=1 00:23:38.579 --rc genhtml_legend=1 00:23:38.579 --rc geninfo_all_blocks=1 00:23:38.579 --rc geninfo_unexecuted_blocks=1 00:23:38.579 00:23:38.579 ' 00:23:38.579 06:48:49 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:23:38.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.579 --rc genhtml_branch_coverage=1 00:23:38.579 --rc genhtml_function_coverage=1 00:23:38.579 --rc genhtml_legend=1 00:23:38.579 --rc geninfo_all_blocks=1 00:23:38.579 --rc geninfo_unexecuted_blocks=1 00:23:38.579 00:23:38.579 ' 00:23:38.579 06:48:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:38.579 06:48:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:38.579 06:48:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.579 06:48:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.579 06:48:49 -- paths/export.sh@2 -- $ 
00:23:38.579 06:48:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:38.579 06:48:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:23:38.579 06:48:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:38.579 06:48:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:38.579 06:48:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:38.579 06:48:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:38.579 06:48:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:38.579 06:48:49 -- paths/export.sh@5 -- $ export PATH
00:23:38.579 06:48:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
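Each paths/export.sh line above prepends one tool directory (golangci-lint, go, protoc) and re-exports PATH. Because the prepend is unconditional, directories that are already present are added again, which is why /opt/go/1.21.1/bin and friends appear twice in the traced values. A de-duplicating variant is a common fix; a minimal sketch (path_prepend is a hypothetical helper, not part of export.sh):

#!/usr/bin/env bash
# Sketch of an idempotent take on the prepend pattern traced above.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;            # already on PATH: leave it unchanged
    *) PATH="$1:$PATH" ;;   # otherwise prepend the new directory
  esac
}

path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
export PATH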
00:23:38.579 06:48:49 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:23:38.579 06:48:49 -- common/autobuild_common.sh@440 -- $ date +%s
00:23:38.579 06:48:49 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732776529.XXXXXX
00:23:38.579 06:48:49 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732776529.xcZuyh
00:23:38.579 06:48:49 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:23:38.579 06:48:49 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:23:38.579 06:48:49 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:23:38.579 06:48:49 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:23:38.579 06:48:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:23:38.579 06:48:49 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:23:38.579 06:48:49 -- common/autobuild_common.sh@456 -- $ get_config_params
00:23:38.579 06:48:49 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:23:38.579 06:48:49 -- common/autotest_common.sh@10 -- $ set +x
00:23:38.579 06:48:49 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:23:38.579 06:48:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:23:38.579 06:48:49 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:23:38.579 06:48:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:23:38.579 06:48:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:23:38.579 06:48:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:23:38.579 06:48:49 -- spdk/autopackage.sh@19 -- $ timing_finish
00:23:38.579 06:48:49 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:38.579 06:48:49 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:23:38.579 06:48:49 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:38.579 06:48:49 -- spdk/autopackage.sh@20 -- $ exit 0
+ [[ -n 5717 ]]
+ sudo kill 5717
00:23:38.588 [Pipeline] }
00:23:38.602 [Pipeline] // timeout
00:23:38.607 [Pipeline] }
00:23:38.622 [Pipeline] // stage
00:23:38.627 [Pipeline] }
00:23:38.642 [Pipeline] // catchError
00:23:38.652 [Pipeline] stage
00:23:38.654 [Pipeline] { (Stop VM)
00:23:38.670 [Pipeline] sh
00:23:38.950 + vagrant halt
00:23:41.482 ==> default: Halting domain...
00:23:46.826 [Pipeline] sh
00:23:47.109 + vagrant destroy -f
00:23:50.423 ==> default: Removing domain...
00:23:50.699 [Pipeline] sh
00:23:50.984 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:23:50.996 [Pipeline] }
00:23:51.010 [Pipeline] // stage
00:23:51.016 [Pipeline] }
00:23:51.030 [Pipeline] // dir
00:23:51.036 [Pipeline] }
00:23:51.051 [Pipeline] // wrap
00:23:51.057 [Pipeline] }
00:23:51.070 [Pipeline] // catchError
00:23:51.079 [Pipeline] stage
00:23:51.082 [Pipeline] { (Epilogue)
00:23:51.094 [Pipeline] sh
00:23:51.380 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:56.679 [Pipeline] catchError
00:23:56.681 [Pipeline] {
00:23:56.694 [Pipeline] sh
00:23:56.979 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:56.979 Artifacts sizes are good
00:23:56.997 [Pipeline] }
00:23:57.012 [Pipeline] // catchError
00:23:57.026 [Pipeline] archiveArtifacts
00:23:57.035 Archiving artifacts
00:23:57.138 [Pipeline] cleanWs
00:23:57.150 [WS-CLEANUP] Deleting project workspace...
00:23:57.150 [WS-CLEANUP] Deferred wipeout is used...
00:23:57.156 [WS-CLEANUP] done
00:23:57.158 [Pipeline] }
00:23:57.173 [Pipeline] // stage
00:23:57.178 [Pipeline] }
00:23:57.191 [Pipeline] // node
00:23:57.195 [Pipeline] End of Pipeline
00:23:57.235 Finished: SUCCESS