00:00:00.000 Started by upstream project "autotest-per-patch" build number 126156 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:03.924 The recommended git tool is: git 00:00:03.924 using credential 00000000-0000-0000-0000-000000000002 00:00:03.926 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:03.939 Fetching changes from the remote Git repository 00:00:03.941 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:03.955 Using shallow fetch with depth 1 00:00:03.955 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:03.955 > git --version # timeout=10 00:00:03.969 > git --version # 'git version 2.39.2' 00:00:03.969 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:03.984 Setting http proxy: proxy-dmz.intel.com:911 00:00:03.984 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.893 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.905 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.918 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.918 > git config core.sparsecheckout # timeout=10 00:00:08.932 > git read-tree -mu HEAD # timeout=10 00:00:08.948 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.968 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.969 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:09.059 [Pipeline] Start of Pipeline 00:00:09.072 [Pipeline] library 00:00:09.073 Loading library shm_lib@master 00:00:09.073 Library shm_lib@master is cached. Copying from home. 
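Note: the pinned checkout above can be reproduced outside Jenkins with plain git. A minimal sketch, using only the repository URL and revision recorded in this log (the CI's GIT_ASKPASS credential helper and proxy-dmz settings are omitted and would be needed for real access):

# Shallow-fetch the jenkins_build_pool repo and pin it to the revision Jenkins checked out.
git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --depth=1 origin refs/heads/master
git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d   # FETCH_HEAD for this build ("inventory: add WCP3 to free inventory")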
00:00:09.085 [Pipeline] node 00:00:24.087 Still waiting to schedule task 00:00:24.088 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP14’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP16’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP18’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP19’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP20’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP4’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘GP5’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘ME1’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘ME2’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘ME3’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘PE5’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM1’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM28’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM29’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM2’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM30’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM31’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM32’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM33’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM34’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM35’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM5’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM6’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM7’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘SM8’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘VM-host-WFP1’ is offline 00:00:24.088 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP49’ 
doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’ 00:00:24.088 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’ 00:17:17.798 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest 00:17:17.800 [Pipeline] { 00:17:17.813 [Pipeline] catchError 00:17:17.815 [Pipeline] { 00:17:17.829 [Pipeline] wrap 00:17:17.836 [Pipeline] { 00:17:17.843 [Pipeline] stage 00:17:17.844 [Pipeline] { (Prologue) 00:17:17.861 [Pipeline] echo 00:17:17.862 Node: VM-host-SM0 00:17:17.867 [Pipeline] cleanWs 00:17:17.876 [WS-CLEANUP] Deleting project workspace... 00:17:17.876 [WS-CLEANUP] Deferred wipeout is used... 00:17:17.882 [WS-CLEANUP] done 00:17:18.053 [Pipeline] setCustomBuildProperty 00:17:18.149 [Pipeline] httpRequest 00:17:18.163 [Pipeline] echo 00:17:18.164 Sorcerer 10.211.164.101 is alive 00:17:18.170 [Pipeline] httpRequest 00:17:18.173 HttpMethod: GET 00:17:18.174 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:17:18.175 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:17:18.176 Response Code: HTTP/1.1 200 OK 00:17:18.176 Success: Status code 200 is in the accepted range: 200,404 00:17:18.177 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:17:18.319 [Pipeline] sh 00:17:18.621 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:17:18.638 [Pipeline] httpRequest 00:17:18.655 [Pipeline] echo 00:17:18.656 Sorcerer 10.211.164.101 is alive 00:17:18.664 [Pipeline] httpRequest 00:17:18.669 HttpMethod: GET 00:17:18.670 URL: http://10.211.164.101/packages/spdk_9c8eb396d017a27f19ea7bf2fa8a71828b8253da.tar.gz 00:17:18.670 Sending request to url: http://10.211.164.101/packages/spdk_9c8eb396d017a27f19ea7bf2fa8a71828b8253da.tar.gz 00:17:18.671 Response Code: HTTP/1.1 200 OK 00:17:18.671 Success: Status code 200 is in the accepted range: 200,404 00:17:18.672 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_9c8eb396d017a27f19ea7bf2fa8a71828b8253da.tar.gz 00:17:20.834 [Pipeline] sh 00:17:21.112 + tar --no-same-owner -xf spdk_9c8eb396d017a27f19ea7bf2fa8a71828b8253da.tar.gz 00:17:24.405 [Pipeline] sh 00:17:24.684 + git -C spdk log --oneline -n5 00:17:24.684 9c8eb396d test/nvme/perf: Use parse_cpu_list() to parse the cpu config 00:17:24.684 a22f117fe nvme/perf: Use sqthread_poll_cpu for io_uring workloads 00:17:24.684 719d03c6a sock/uring: only register net impl if supported 00:17:24.684 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:17:24.684 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:17:24.707 [Pipeline] writeFile 00:17:24.727 [Pipeline] sh 00:17:25.019 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:17:25.029 [Pipeline] sh 00:17:25.306 + cat autorun-spdk.conf 00:17:25.306 SPDK_RUN_FUNCTIONAL_TEST=1 00:17:25.306 SPDK_TEST_NVME=1 00:17:25.306 SPDK_TEST_FTL=1 00:17:25.306 SPDK_TEST_ISAL=1 00:17:25.306 SPDK_RUN_ASAN=1 00:17:25.306 SPDK_RUN_UBSAN=1 00:17:25.306 SPDK_TEST_XNVME=1 
00:17:25.306 SPDK_TEST_NVME_FDP=1 00:17:25.306 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:25.312 RUN_NIGHTLY=0 00:17:25.314 [Pipeline] } 00:17:25.332 [Pipeline] // stage 00:17:25.350 [Pipeline] stage 00:17:25.352 [Pipeline] { (Run VM) 00:17:25.367 [Pipeline] sh 00:17:25.645 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:17:25.645 + echo 'Start stage prepare_nvme.sh' 00:17:25.645 Start stage prepare_nvme.sh 00:17:25.645 + [[ -n 0 ]] 00:17:25.645 + disk_prefix=ex0 00:17:25.645 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:17:25.645 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:17:25.645 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:17:25.645 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:25.645 ++ SPDK_TEST_NVME=1 00:17:25.645 ++ SPDK_TEST_FTL=1 00:17:25.645 ++ SPDK_TEST_ISAL=1 00:17:25.645 ++ SPDK_RUN_ASAN=1 00:17:25.645 ++ SPDK_RUN_UBSAN=1 00:17:25.645 ++ SPDK_TEST_XNVME=1 00:17:25.645 ++ SPDK_TEST_NVME_FDP=1 00:17:25.645 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:25.645 ++ RUN_NIGHTLY=0 00:17:25.645 + cd /var/jenkins/workspace/nvme-vg-autotest 00:17:25.645 + nvme_files=() 00:17:25.645 + declare -A nvme_files 00:17:25.645 + backend_dir=/var/lib/libvirt/images/backends 00:17:25.645 + nvme_files['nvme.img']=5G 00:17:25.645 + nvme_files['nvme-cmb.img']=5G 00:17:25.645 + nvme_files['nvme-multi0.img']=4G 00:17:25.645 + nvme_files['nvme-multi1.img']=4G 00:17:25.645 + nvme_files['nvme-multi2.img']=4G 00:17:25.645 + nvme_files['nvme-openstack.img']=8G 00:17:25.645 + nvme_files['nvme-zns.img']=5G 00:17:25.645 + (( SPDK_TEST_NVME_PMR == 1 )) 00:17:25.645 + (( SPDK_TEST_FTL == 1 )) 00:17:25.645 + nvme_files["nvme-ftl.img"]=6G 00:17:25.645 + (( SPDK_TEST_NVME_FDP == 1 )) 00:17:25.645 + nvme_files["nvme-fdp.img"]=1G 00:17:25.645 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:17:25.645 + for nvme in "${!nvme_files[@]}" 00:17:25.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:17:25.645 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:17:25.645 + for nvme in "${!nvme_files[@]}" 00:17:25.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:17:25.645 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:17:25.645 + for nvme in "${!nvme_files[@]}" 00:17:25.645 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:17:26.212 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:17:26.212 + for nvme in "${!nvme_files[@]}" 00:17:26.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:17:26.213 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:17:26.213 + for nvme in "${!nvme_files[@]}" 00:17:26.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:17:26.471 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:17:26.471 + for nvme in "${!nvme_files[@]}" 00:17:26.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:17:26.471 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:17:26.471 + for nvme in "${!nvme_files[@]}" 00:17:26.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:17:26.471 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:17:26.471 + for nvme in "${!nvme_files[@]}" 00:17:26.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:17:26.471 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:17:26.471 + for nvme in "${!nvme_files[@]}" 00:17:26.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:17:27.406 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:17:27.406 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:17:27.406 + echo 'End stage prepare_nvme.sh' 00:17:27.406 End stage prepare_nvme.sh 00:17:27.419 [Pipeline] sh 00:17:27.700 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:17:27.700 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:17:27.700 00:17:27.700 
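Note: each -b argument in the Setup line above describes one emulated NVMe disk as a comma-separated spec. Lining the specs up with the NVME_* variables dumped below (NVME_DISKS_NAMESPACES, NVME_MS=true,,,,, NVME_FDP=,,,on), the fields appear to be backing image, disk type, extra namespace images, then per-feature flags ending with metadata (ms) and FDP. A minimal sketch of splitting one spec in shell; the ordering of the middle CMB/PMR/ZNS fields is an assumption, only the ms and fdp positions are confirmed by this log:

# Split the FDP disk spec from the Setup line; field names past 'namespaces' are inferred.
spec='/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on'
IFS=',' read -r img type namespaces cmb pmr zns ms fdp <<< "$spec"
echo "file=$img type=$type ms=${ms:-off} fdp=${fdp:-off}"
# -> file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img type=nvme ms=off fdp=on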
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:17:27.700 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:17:27.700 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:17:27.700 HELP=0 00:17:27.700 DRY_RUN=0 00:17:27.700 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:17:27.700 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:17:27.700 NVME_AUTO_CREATE=0 00:17:27.700 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:17:27.700 NVME_CMB=,,,, 00:17:27.700 NVME_PMR=,,,, 00:17:27.700 NVME_ZNS=,,,, 00:17:27.700 NVME_MS=true,,,, 00:17:27.700 NVME_FDP=,,,on, 00:17:27.700 SPDK_VAGRANT_DISTRO=fedora38 00:17:27.700 SPDK_VAGRANT_VMCPU=10 00:17:27.700 SPDK_VAGRANT_VMRAM=12288 00:17:27.700 SPDK_VAGRANT_PROVIDER=libvirt 00:17:27.700 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:17:27.700 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:17:27.700 SPDK_OPENSTACK_NETWORK=0 00:17:27.700 VAGRANT_PACKAGE_BOX=0 00:17:27.700 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:17:27.700 FORCE_DISTRO=true 00:17:27.700 VAGRANT_BOX_VERSION= 00:17:27.700 EXTRA_VAGRANTFILES= 00:17:27.700 NIC_MODEL=e1000 00:17:27.700 00:17:27.700 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:17:27.700 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:17:30.982 Bringing machine 'default' up with 'libvirt' provider... 00:17:31.549 ==> default: Creating image (snapshot of base box volume). 00:17:31.808 ==> default: Creating domain with the following settings... 
00:17:31.809 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721028369_89124d85d7310c20e7eb 00:17:31.809 ==> default: -- Domain type: kvm 00:17:31.809 ==> default: -- Cpus: 10 00:17:31.809 ==> default: -- Feature: acpi 00:17:31.809 ==> default: -- Feature: apic 00:17:31.809 ==> default: -- Feature: pae 00:17:31.809 ==> default: -- Memory: 12288M 00:17:31.809 ==> default: -- Memory Backing: hugepages: 00:17:31.809 ==> default: -- Management MAC: 00:17:31.809 ==> default: -- Loader: 00:17:31.809 ==> default: -- Nvram: 00:17:31.809 ==> default: -- Base box: spdk/fedora38 00:17:31.809 ==> default: -- Storage pool: default 00:17:31.809 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721028369_89124d85d7310c20e7eb.img (20G) 00:17:31.809 ==> default: -- Volume Cache: default 00:17:31.809 ==> default: -- Kernel: 00:17:31.809 ==> default: -- Initrd: 00:17:31.809 ==> default: -- Graphics Type: vnc 00:17:31.809 ==> default: -- Graphics Port: -1 00:17:31.809 ==> default: -- Graphics IP: 127.0.0.1 00:17:31.809 ==> default: -- Graphics Password: Not defined 00:17:31.809 ==> default: -- Video Type: cirrus 00:17:31.809 ==> default: -- Video VRAM: 9216 00:17:31.809 ==> default: -- Sound Type: 00:17:31.809 ==> default: -- Keymap: en-us 00:17:31.809 ==> default: -- TPM Path: 00:17:31.809 ==> default: -- INPUT: type=mouse, bus=ps2 00:17:31.809 ==> default: -- Command line args: 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:17:31.809 ==> default: -> value=-drive, 00:17:31.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:17:31.809 ==> default: -> value=-device, 00:17:31.809 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:31.809 ==> default: Creating shared folders metadata... 00:17:31.809 ==> default: Starting domain. 00:17:34.347 ==> default: Waiting for domain to get an IP address... 00:17:49.243 ==> default: Waiting for SSH to become available... 00:17:51.143 ==> default: Configuring and enabling network interfaces... 00:17:55.391 default: SSH address: 192.168.121.176:22 00:17:55.391 default: SSH username: vagrant 00:17:55.391 default: SSH auth method: private key 00:17:57.340 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:18:05.481 ==> default: Mounting SSHFS shared folder... 00:18:06.413 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:18:06.413 ==> default: Checking Mount.. 00:18:07.786 ==> default: Folder Successfully Mounted! 00:18:07.786 ==> default: Running provisioner: file... 00:18:08.352 default: ~/.gitconfig => .gitconfig 00:18:08.918 00:18:08.918 SUCCESS! 00:18:08.918 00:18:08.918 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:18:08.918 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:18:08.918 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:18:08.918 00:18:08.928 [Pipeline] } 00:18:08.946 [Pipeline] // stage 00:18:08.956 [Pipeline] dir 00:18:08.957 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:18:08.959 [Pipeline] { 00:18:08.975 [Pipeline] catchError 00:18:08.977 [Pipeline] { 00:18:08.993 [Pipeline] sh 00:18:09.270 + vagrant ssh-config --host vagrant 00:18:09.270 + sed -ne /^Host/,$p 00:18:09.270 + tee ssh_conf 00:18:13.473 Host vagrant 00:18:13.473 HostName 192.168.121.176 00:18:13.473 User vagrant 00:18:13.473 Port 22 00:18:13.473 UserKnownHostsFile /dev/null 00:18:13.473 StrictHostKeyChecking no 00:18:13.473 PasswordAuthentication no 00:18:13.473 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:18:13.473 IdentitiesOnly yes 00:18:13.473 LogLevel FATAL 00:18:13.473 ForwardAgent yes 00:18:13.473 ForwardX11 yes 00:18:13.473 00:18:13.524 [Pipeline] withEnv 00:18:13.526 [Pipeline] { 00:18:13.540 [Pipeline] sh 00:18:13.829 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:18:13.829 source /etc/os-release 00:18:13.829 [[ -e /image.version ]] && img=$(< /image.version) 00:18:13.829 # Minimal, systemd-like check. 
00:18:13.829 if [[ -e /.dockerenv ]]; then 00:18:13.829 # Clear garbage from the node's name: 00:18:13.829 # agt-er_autotest_547-896 -> autotest_547-896 00:18:13.830 # $HOSTNAME is the actual container id 00:18:13.830 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:18:13.830 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:18:13.830 # We can assume this is a mount from a host where container is running, 00:18:13.830 # so fetch its hostname to easily identify the target swarm worker. 00:18:13.830 container="$(< /etc/hostname) ($agent)" 00:18:13.830 else 00:18:13.830 # Fallback 00:18:13.830 container=$agent 00:18:13.830 fi 00:18:13.830 fi 00:18:13.830 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:18:13.830 00:18:14.097 [Pipeline] } 00:18:14.115 [Pipeline] // withEnv 00:18:14.123 [Pipeline] setCustomBuildProperty 00:18:14.136 [Pipeline] stage 00:18:14.138 [Pipeline] { (Tests) 00:18:14.155 [Pipeline] sh 00:18:14.433 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:18:14.702 [Pipeline] sh 00:18:14.998 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:18:15.013 [Pipeline] timeout 00:18:15.013 Timeout set to expire in 40 min 00:18:15.016 [Pipeline] { 00:18:15.029 [Pipeline] sh 00:18:15.305 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:18:15.908 HEAD is now at 9c8eb396d test/nvme/perf: Use parse_cpu_list() to parse the cpu config 00:18:15.920 [Pipeline] sh 00:18:16.198 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:18:16.467 [Pipeline] sh 00:18:16.745 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:18:17.018 [Pipeline] sh 00:18:17.295 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:18:17.554 ++ readlink -f spdk_repo 00:18:17.554 + DIR_ROOT=/home/vagrant/spdk_repo 00:18:17.554 + [[ -n /home/vagrant/spdk_repo ]] 00:18:17.554 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:18:17.554 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:18:17.554 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:18:17.554 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:18:17.554 + [[ -d /home/vagrant/spdk_repo/output ]] 00:18:17.554 + [[ nvme-vg-autotest == pkgdep-* ]] 00:18:17.554 + cd /home/vagrant/spdk_repo 00:18:17.554 + source /etc/os-release 00:18:17.554 ++ NAME='Fedora Linux' 00:18:17.554 ++ VERSION='38 (Cloud Edition)' 00:18:17.554 ++ ID=fedora 00:18:17.554 ++ VERSION_ID=38 00:18:17.554 ++ VERSION_CODENAME= 00:18:17.554 ++ PLATFORM_ID=platform:f38 00:18:17.554 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:18:17.554 ++ ANSI_COLOR='0;38;2;60;110;180' 00:18:17.554 ++ LOGO=fedora-logo-icon 00:18:17.554 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:18:17.554 ++ HOME_URL=https://fedoraproject.org/ 00:18:17.554 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:18:17.554 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:18:17.554 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:18:17.554 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:18:17.554 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:18:17.554 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:18:17.554 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:18:17.554 ++ SUPPORT_END=2024-05-14 00:18:17.554 ++ VARIANT='Cloud Edition' 00:18:17.554 ++ VARIANT_ID=cloud 00:18:17.554 + uname -a 00:18:17.554 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:18:17.554 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:18:17.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:18.092 Hugepages 00:18:18.092 node hugesize free / total 00:18:18.092 node0 1048576kB 0 / 0 00:18:18.092 node0 2048kB 0 / 0 00:18:18.092 00:18:18.092 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:18.092 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:18:18.092 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:18:18.350 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:18:18.350 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:18:18.350 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:18:18.350 + rm -f /tmp/spdk-ld-path 00:18:18.350 + source autorun-spdk.conf 00:18:18.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:18.350 ++ SPDK_TEST_NVME=1 00:18:18.350 ++ SPDK_TEST_FTL=1 00:18:18.350 ++ SPDK_TEST_ISAL=1 00:18:18.350 ++ SPDK_RUN_ASAN=1 00:18:18.350 ++ SPDK_RUN_UBSAN=1 00:18:18.350 ++ SPDK_TEST_XNVME=1 00:18:18.350 ++ SPDK_TEST_NVME_FDP=1 00:18:18.350 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:18.350 ++ RUN_NIGHTLY=0 00:18:18.350 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:18:18.350 + [[ -n '' ]] 00:18:18.350 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:18:18.350 + for M in /var/spdk/build-*-manifest.txt 00:18:18.350 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:18:18.350 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:18:18.350 + for M in /var/spdk/build-*-manifest.txt 00:18:18.350 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:18:18.350 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:18:18.350 ++ uname 00:18:18.350 + [[ Linux == \L\i\n\u\x ]] 00:18:18.350 + sudo dmesg -T 00:18:18.350 + sudo dmesg --clear 00:18:18.350 + dmesg_pid=5206 00:18:18.350 + [[ Fedora Linux == FreeBSD ]] 00:18:18.350 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:18.350 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:18.350 + sudo dmesg -Tw 00:18:18.350 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:18:18.350 + [[ -x /usr/src/fio-static/fio ]] 00:18:18.350 + export FIO_BIN=/usr/src/fio-static/fio 00:18:18.350 + FIO_BIN=/usr/src/fio-static/fio 00:18:18.350 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:18:18.350 + [[ ! -v VFIO_QEMU_BIN ]] 00:18:18.350 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:18:18.350 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:18.350 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:18.350 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:18:18.351 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:18.351 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:18.351 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:18:18.351 Test configuration: 00:18:18.351 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:18.351 SPDK_TEST_NVME=1 00:18:18.351 SPDK_TEST_FTL=1 00:18:18.351 SPDK_TEST_ISAL=1 00:18:18.351 SPDK_RUN_ASAN=1 00:18:18.351 SPDK_RUN_UBSAN=1 00:18:18.351 SPDK_TEST_XNVME=1 00:18:18.351 SPDK_TEST_NVME_FDP=1 00:18:18.351 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:18.609 RUN_NIGHTLY=0 07:26:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.609 07:26:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:18:18.609 07:26:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.609 07:26:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.609 07:26:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.609 07:26:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.609 07:26:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.609 07:26:56 -- paths/export.sh@5 -- $ export PATH 00:18:18.609 07:26:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.609 07:26:56 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:18:18.609 07:26:56 -- common/autobuild_common.sh@444 -- $ date +%s 00:18:18.609 07:26:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt 
spdk_1721028416.XXXXXX 00:18:18.609 07:26:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721028416.fEIig4 00:18:18.609 07:26:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:18:18.609 07:26:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:18:18.609 07:26:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:18:18.609 07:26:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:18:18.609 07:26:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:18:18.609 07:26:57 -- common/autobuild_common.sh@460 -- $ get_config_params 00:18:18.609 07:26:57 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:18:18.609 07:26:57 -- common/autotest_common.sh@10 -- $ set +x 00:18:18.609 07:26:57 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:18:18.609 07:26:57 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:18:18.609 07:26:57 -- pm/common@17 -- $ local monitor 00:18:18.609 07:26:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:18.609 07:26:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:18.609 07:26:57 -- pm/common@25 -- $ sleep 1 00:18:18.609 07:26:57 -- pm/common@21 -- $ date +%s 00:18:18.609 07:26:57 -- pm/common@21 -- $ date +%s 00:18:18.609 07:26:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721028417 00:18:18.609 07:26:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721028417 00:18:18.609 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721028417_collect-vmstat.pm.log 00:18:18.609 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721028417_collect-cpu-load.pm.log 00:18:19.544 07:26:58 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:18:19.544 07:26:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:18:19.544 07:26:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:18:19.544 07:26:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:18:19.544 07:26:58 -- spdk/autobuild.sh@16 -- $ date -u 00:18:19.544 Mon Jul 15 07:26:58 AM UTC 2024 00:18:19.544 07:26:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:18:19.544 v24.09-pre-204-g9c8eb396d 00:18:19.544 07:26:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:18:19.544 07:26:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:18:19.544 07:26:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:18:19.544 07:26:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:18:19.544 07:26:58 -- common/autotest_common.sh@10 -- $ set +x 00:18:19.544 ************************************ 00:18:19.544 START TEST asan 00:18:19.544 ************************************ 00:18:19.544 using asan 00:18:19.544 07:26:58 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:18:19.544 00:18:19.544 
real 0m0.000s 00:18:19.544 user 0m0.000s 00:18:19.544 sys 0m0.000s 00:18:19.544 07:26:58 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:18:19.544 07:26:58 asan -- common/autotest_common.sh@10 -- $ set +x 00:18:19.544 ************************************ 00:18:19.544 END TEST asan 00:18:19.544 ************************************ 00:18:19.544 07:26:58 -- common/autotest_common.sh@1142 -- $ return 0 00:18:19.544 07:26:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:18:19.544 07:26:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:18:19.544 07:26:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:18:19.544 07:26:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:18:19.544 07:26:58 -- common/autotest_common.sh@10 -- $ set +x 00:18:19.544 ************************************ 00:18:19.544 START TEST ubsan 00:18:19.544 ************************************ 00:18:19.544 using ubsan 00:18:19.544 07:26:58 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:18:19.544 00:18:19.544 real 0m0.000s 00:18:19.544 user 0m0.000s 00:18:19.544 sys 0m0.000s 00:18:19.544 07:26:58 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:18:19.544 07:26:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:18:19.544 ************************************ 00:18:19.544 END TEST ubsan 00:18:19.544 ************************************ 00:18:19.544 07:26:58 -- common/autotest_common.sh@1142 -- $ return 0 00:18:19.544 07:26:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:18:19.544 07:26:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:18:19.544 07:26:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:18:19.544 07:26:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:18:19.803 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:19.803 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:20.369 Using 'verbs' RDMA provider 00:18:36.170 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:18:48.370 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:18:48.370 Creating mk/config.mk...done. 00:18:48.370 Creating mk/cc.flags.mk...done. 00:18:48.370 Type 'make' to build. 
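Note: the configure invocation above lines up with autorun-spdk.conf (for example --enable-asan/--enable-ubsan mirror SPDK_RUN_ASAN/SPDK_RUN_UBSAN, and --with-xnvme mirrors SPDK_TEST_XNVME). A minimal sketch of repeating the same build by hand inside the VM, with the flags copied verbatim from this log:

cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
make -j10    # same parallelism as SPDK_VAGRANT_VMCPU=10 used when the VM was created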
00:18:48.370 07:27:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:18:48.370 07:27:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:18:48.370 07:27:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:18:48.370 07:27:25 -- common/autotest_common.sh@10 -- $ set +x 00:18:48.370 ************************************ 00:18:48.370 START TEST make 00:18:48.370 ************************************ 00:18:48.370 07:27:25 make -- common/autotest_common.sh@1123 -- $ make -j10 00:18:48.370 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:18:48.370 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:18:48.370 meson setup builddir \ 00:18:48.370 -Dwith-libaio=enabled \ 00:18:48.370 -Dwith-liburing=enabled \ 00:18:48.370 -Dwith-libvfn=disabled \ 00:18:48.370 -Dwith-spdk=false && \ 00:18:48.370 meson compile -C builddir && \ 00:18:48.370 cd -) 00:18:48.370 make[1]: Nothing to be done for 'all'. 00:18:50.268 The Meson build system 00:18:50.268 Version: 1.3.1 00:18:50.268 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:18:50.268 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:18:50.268 Build type: native build 00:18:50.268 Project name: xnvme 00:18:50.268 Project version: 0.7.3 00:18:50.268 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:18:50.268 C linker for the host machine: cc ld.bfd 2.39-16 00:18:50.268 Host machine cpu family: x86_64 00:18:50.268 Host machine cpu: x86_64 00:18:50.268 Message: host_machine.system: linux 00:18:50.268 Compiler for C supports arguments -Wno-missing-braces: YES 00:18:50.268 Compiler for C supports arguments -Wno-cast-function-type: YES 00:18:50.268 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:18:50.268 Run-time dependency threads found: YES 00:18:50.268 Has header "setupapi.h" : NO 00:18:50.268 Has header "linux/blkzoned.h" : YES 00:18:50.268 Has header "linux/blkzoned.h" : YES (cached) 00:18:50.268 Has header "libaio.h" : YES 00:18:50.268 Library aio found: YES 00:18:50.268 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:18:50.268 Run-time dependency liburing found: YES 2.2 00:18:50.268 Dependency libvfn skipped: feature with-libvfn disabled 00:18:50.268 Run-time dependency appleframeworks found: NO (tried framework) 00:18:50.268 Run-time dependency appleframeworks found: NO (tried framework) 00:18:50.268 Configuring xnvme_config.h using configuration 00:18:50.268 Configuring xnvme.spec using configuration 00:18:50.268 Run-time dependency bash-completion found: YES 2.11 00:18:50.268 Message: Bash-completions: /usr/share/bash-completion/completions 00:18:50.268 Program cp found: YES (/usr/bin/cp) 00:18:50.268 Has header "winsock2.h" : NO 00:18:50.268 Has header "dbghelp.h" : NO 00:18:50.268 Library rpcrt4 found: NO 00:18:50.268 Library rt found: YES 00:18:50.268 Checking for function "clock_gettime" with dependency -lrt: YES 00:18:50.268 Found CMake: /usr/bin/cmake (3.27.7) 00:18:50.268 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:18:50.268 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:18:50.268 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:18:50.268 Build targets in project: 32 00:18:50.268 00:18:50.268 xnvme 0.7.3 00:18:50.268 00:18:50.268 User defined options 00:18:50.268 with-libaio : enabled 00:18:50.268 with-liburing: enabled 00:18:50.268 with-libvfn : disabled 00:18:50.268 with-spdk : false 00:18:50.268 00:18:50.268 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:50.834 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:18:50.834 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:18:50.834 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:18:50.834 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:18:50.834 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:18:50.834 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:18:51.092 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:18:51.092 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:18:51.092 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:18:51.092 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:18:51.092 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:18:51.092 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:18:51.092 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:18:51.092 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:18:51.092 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:18:51.092 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:18:51.092 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:18:51.092 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:18:51.092 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:18:51.092 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:18:51.092 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:18:51.350 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:18:51.350 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:18:51.350 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:18:51.350 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:18:51.350 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:18:51.350 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:18:51.350 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:18:51.350 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:18:51.350 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:18:51.350 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:18:51.350 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:18:51.350 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:18:51.350 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:18:51.350 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:18:51.350 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:18:51.350 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:18:51.350 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:18:51.350 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:18:51.350 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:18:51.350 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:18:51.350 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:18:51.350 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:18:51.350 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:18:51.350 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:18:51.350 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:18:51.350 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:18:51.350 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:18:51.350 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:18:51.350 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:18:51.350 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:18:51.350 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:18:51.350 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:18:51.608 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:18:51.608 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:18:51.608 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:18:51.608 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:18:51.608 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:18:51.608 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:18:51.608 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:18:51.608 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:18:51.608 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:18:51.608 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:18:51.608 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:18:51.608 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:18:51.608 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:18:51.608 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:18:51.608 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:18:51.608 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:18:51.608 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:18:51.873 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:18:51.873 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:18:51.873 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:18:51.873 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:18:51.873 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:18:51.873 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:18:51.873 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:18:51.873 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:18:51.873 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:18:51.873 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:18:51.873 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:18:51.873 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:18:51.873 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:18:51.873 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:18:52.165 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:18:52.165 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:18:52.165 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:18:52.165 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:18:52.165 [88/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:18:52.165 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:18:52.165 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:18:52.165 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:18:52.165 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:18:52.165 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:18:52.165 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:18:52.165 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:18:52.165 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:18:52.165 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:18:52.165 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:18:52.165 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:18:52.165 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:18:52.165 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:18:52.165 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:18:52.165 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:18:52.165 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:18:52.165 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:18:52.165 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:18:52.165 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:18:52.165 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:18:52.165 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:18:52.165 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:18:52.424 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:18:52.424 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:18:52.424 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:18:52.424 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:18:52.424 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:18:52.424 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:18:52.424 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:18:52.424 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:18:52.424 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:18:52.424 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:18:52.424 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:18:52.424 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:18:52.424 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:18:52.424 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:18:52.424 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:18:52.424 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:18:52.424 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:18:52.424 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:18:52.424 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_dev.c.o 00:18:52.424 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:18:52.424 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:18:52.424 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:18:52.424 [133/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:18:52.682 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:18:52.682 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:18:52.682 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:18:52.682 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:18:52.682 [138/203] Linking target lib/libxnvme.so 00:18:52.682 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:18:52.682 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:18:52.682 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:18:52.682 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:18:52.682 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:18:52.682 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:18:52.682 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:18:52.682 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:18:52.940 [147/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:18:52.940 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:18:52.940 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:18:52.940 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:18:52.940 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:18:52.940 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:18:52.940 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:18:52.940 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:18:52.940 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:18:52.940 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:18:52.940 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:18:52.940 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:18:53.198 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:18:53.198 [160/203] Compiling C object tools/lblk.p/lblk.c.o 00:18:53.198 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:18:53.198 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:18:53.198 [163/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:18:53.198 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:18:53.198 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:18:53.198 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:18:53.198 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:18:53.198 [168/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:18:53.198 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:18:53.198 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:18:53.455 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:18:53.455 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:18:53.455 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:18:53.455 [174/203] Linking static target lib/libxnvme.a 00:18:53.712 [175/203] Linking target 
tests/xnvme_tests_xnvme_cli 00:18:53.712 [176/203] Linking target tests/xnvme_tests_lblk 00:18:53.712 [177/203] Linking target tests/xnvme_tests_ioworker 00:18:53.712 [178/203] Linking target tests/xnvme_tests_async_intf 00:18:53.712 [179/203] Linking target tests/xnvme_tests_scc 00:18:53.712 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:18:53.712 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:18:53.712 [182/203] Linking target tests/xnvme_tests_cli 00:18:53.712 [183/203] Linking target tests/xnvme_tests_enum 00:18:53.712 [184/203] Linking target tests/xnvme_tests_znd_state 00:18:53.712 [185/203] Linking target tests/xnvme_tests_znd_append 00:18:53.713 [186/203] Linking target tests/xnvme_tests_buf 00:18:53.713 [187/203] Linking target tests/xnvme_tests_map 00:18:53.713 [188/203] Linking target tests/xnvme_tests_kvs 00:18:53.713 [189/203] Linking target tools/xnvme 00:18:53.713 [190/203] Linking target tools/xnvme_file 00:18:53.713 [191/203] Linking target tools/lblk 00:18:53.713 [192/203] Linking target examples/xnvme_hello 00:18:53.713 [193/203] Linking target tools/kvs 00:18:53.713 [194/203] Linking target tools/zoned 00:18:53.713 [195/203] Linking target examples/xnvme_dev 00:18:53.713 [196/203] Linking target examples/xnvme_enum 00:18:53.713 [197/203] Linking target examples/xnvme_single_sync 00:18:53.713 [198/203] Linking target examples/xnvme_io_async 00:18:53.713 [199/203] Linking target tests/xnvme_tests_znd_zrwa 00:18:53.713 [200/203] Linking target tools/xdd 00:18:53.713 [201/203] Linking target examples/zoned_io_sync 00:18:53.713 [202/203] Linking target examples/xnvme_single_async 00:18:53.713 [203/203] Linking target examples/zoned_io_async 00:18:53.713 INFO: autodetecting backend as ninja 00:18:53.713 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:18:53.713 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:18:58.996 The Meson build system 00:18:58.996 Version: 1.3.1 00:18:58.996 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:18:58.996 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:18:58.996 Build type: native build 00:18:58.996 Program cat found: YES (/usr/bin/cat) 00:18:58.996 Project name: DPDK 00:18:58.996 Project version: 24.03.0 00:18:58.996 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:18:58.996 C linker for the host machine: cc ld.bfd 2.39-16 00:18:58.996 Host machine cpu family: x86_64 00:18:58.996 Host machine cpu: x86_64 00:18:58.996 Message: ## Building in Developer Mode ## 00:18:58.996 Program pkg-config found: YES (/usr/bin/pkg-config) 00:18:58.996 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:18:58.996 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:18:58.996 Program python3 found: YES (/usr/bin/python3) 00:18:58.996 Program cat found: YES (/usr/bin/cat) 00:18:58.996 Compiler for C supports arguments -march=native: YES 00:18:58.996 Checking for size of "void *" : 8 00:18:58.996 Checking for size of "void *" : 8 (cached) 00:18:58.996 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:18:58.996 Library m found: YES 00:18:58.996 Library numa found: YES 00:18:58.996 Has header "numaif.h" : YES 00:18:58.996 Library fdt found: NO 00:18:58.996 Library execinfo found: NO 00:18:58.996 Has header "execinfo.h" : YES 00:18:58.996 Found pkg-config: YES (/usr/bin/pkg-config) 
1.8.0 00:18:58.996 Run-time dependency libarchive found: NO (tried pkgconfig) 00:18:58.996 Run-time dependency libbsd found: NO (tried pkgconfig) 00:18:58.996 Run-time dependency jansson found: NO (tried pkgconfig) 00:18:58.996 Run-time dependency openssl found: YES 3.0.9 00:18:58.996 Run-time dependency libpcap found: YES 1.10.4 00:18:58.996 Has header "pcap.h" with dependency libpcap: YES 00:18:58.996 Compiler for C supports arguments -Wcast-qual: YES 00:18:58.996 Compiler for C supports arguments -Wdeprecated: YES 00:18:58.996 Compiler for C supports arguments -Wformat: YES 00:18:58.996 Compiler for C supports arguments -Wformat-nonliteral: NO 00:18:58.996 Compiler for C supports arguments -Wformat-security: NO 00:18:58.996 Compiler for C supports arguments -Wmissing-declarations: YES 00:18:58.996 Compiler for C supports arguments -Wmissing-prototypes: YES 00:18:58.996 Compiler for C supports arguments -Wnested-externs: YES 00:18:58.996 Compiler for C supports arguments -Wold-style-definition: YES 00:18:58.996 Compiler for C supports arguments -Wpointer-arith: YES 00:18:58.996 Compiler for C supports arguments -Wsign-compare: YES 00:18:58.996 Compiler for C supports arguments -Wstrict-prototypes: YES 00:18:58.996 Compiler for C supports arguments -Wundef: YES 00:18:58.996 Compiler for C supports arguments -Wwrite-strings: YES 00:18:58.996 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:18:58.996 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:18:58.996 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:18:58.996 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:18:58.996 Program objdump found: YES (/usr/bin/objdump) 00:18:58.996 Compiler for C supports arguments -mavx512f: YES 00:18:58.996 Checking if "AVX512 checking" compiles: YES 00:18:58.996 Fetching value of define "__SSE4_2__" : 1 00:18:58.996 Fetching value of define "__AES__" : 1 00:18:58.996 Fetching value of define "__AVX__" : 1 00:18:58.996 Fetching value of define "__AVX2__" : 1 00:18:58.996 Fetching value of define "__AVX512BW__" : (undefined) 00:18:58.996 Fetching value of define "__AVX512CD__" : (undefined) 00:18:58.996 Fetching value of define "__AVX512DQ__" : (undefined) 00:18:58.996 Fetching value of define "__AVX512F__" : (undefined) 00:18:58.996 Fetching value of define "__AVX512VL__" : (undefined) 00:18:58.996 Fetching value of define "__PCLMUL__" : 1 00:18:58.996 Fetching value of define "__RDRND__" : 1 00:18:58.996 Fetching value of define "__RDSEED__" : 1 00:18:58.996 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:18:58.996 Fetching value of define "__znver1__" : (undefined) 00:18:58.996 Fetching value of define "__znver2__" : (undefined) 00:18:58.996 Fetching value of define "__znver3__" : (undefined) 00:18:58.996 Fetching value of define "__znver4__" : (undefined) 00:18:58.996 Library asan found: YES 00:18:58.996 Compiler for C supports arguments -Wno-format-truncation: YES 00:18:58.996 Message: lib/log: Defining dependency "log" 00:18:58.996 Message: lib/kvargs: Defining dependency "kvargs" 00:18:58.996 Message: lib/telemetry: Defining dependency "telemetry" 00:18:58.996 Library rt found: YES 00:18:58.996 Checking for function "getentropy" : NO 00:18:58.996 Message: lib/eal: Defining dependency "eal" 00:18:58.996 Message: lib/ring: Defining dependency "ring" 00:18:58.996 Message: lib/rcu: Defining dependency "rcu" 00:18:58.996 Message: lib/mempool: Defining dependency "mempool" 00:18:58.996 Message: lib/mbuf: Defining 
dependency "mbuf" 00:18:58.996 Fetching value of define "__PCLMUL__" : 1 (cached) 00:18:58.996 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:18:58.996 Compiler for C supports arguments -mpclmul: YES 00:18:58.996 Compiler for C supports arguments -maes: YES 00:18:58.996 Compiler for C supports arguments -mavx512f: YES (cached) 00:18:58.996 Compiler for C supports arguments -mavx512bw: YES 00:18:58.997 Compiler for C supports arguments -mavx512dq: YES 00:18:58.997 Compiler for C supports arguments -mavx512vl: YES 00:18:58.997 Compiler for C supports arguments -mvpclmulqdq: YES 00:18:58.997 Compiler for C supports arguments -mavx2: YES 00:18:58.997 Compiler for C supports arguments -mavx: YES 00:18:58.997 Message: lib/net: Defining dependency "net" 00:18:58.997 Message: lib/meter: Defining dependency "meter" 00:18:58.997 Message: lib/ethdev: Defining dependency "ethdev" 00:18:58.997 Message: lib/pci: Defining dependency "pci" 00:18:58.997 Message: lib/cmdline: Defining dependency "cmdline" 00:18:58.997 Message: lib/hash: Defining dependency "hash" 00:18:58.997 Message: lib/timer: Defining dependency "timer" 00:18:58.997 Message: lib/compressdev: Defining dependency "compressdev" 00:18:58.997 Message: lib/cryptodev: Defining dependency "cryptodev" 00:18:58.997 Message: lib/dmadev: Defining dependency "dmadev" 00:18:58.997 Compiler for C supports arguments -Wno-cast-qual: YES 00:18:58.997 Message: lib/power: Defining dependency "power" 00:18:58.997 Message: lib/reorder: Defining dependency "reorder" 00:18:58.997 Message: lib/security: Defining dependency "security" 00:18:58.997 Has header "linux/userfaultfd.h" : YES 00:18:58.997 Has header "linux/vduse.h" : YES 00:18:58.997 Message: lib/vhost: Defining dependency "vhost" 00:18:58.997 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:18:58.997 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:18:58.997 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:18:58.997 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:18:58.997 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:18:58.997 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:18:58.997 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:18:58.997 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:18:58.997 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:18:58.997 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:18:58.997 Program doxygen found: YES (/usr/bin/doxygen) 00:18:58.997 Configuring doxy-api-html.conf using configuration 00:18:58.997 Configuring doxy-api-man.conf using configuration 00:18:58.997 Program mandb found: YES (/usr/bin/mandb) 00:18:58.997 Program sphinx-build found: NO 00:18:58.997 Configuring rte_build_config.h using configuration 00:18:58.997 Message: 00:18:58.997 ================= 00:18:58.997 Applications Enabled 00:18:58.997 ================= 00:18:58.997 00:18:58.997 apps: 00:18:58.997 00:18:58.997 00:18:58.997 Message: 00:18:58.997 ================= 00:18:58.997 Libraries Enabled 00:18:58.997 ================= 00:18:58.997 00:18:58.997 libs: 00:18:58.997 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:18:58.997 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:18:58.997 cryptodev, dmadev, power, reorder, security, vhost, 00:18:58.997 00:18:58.997 Message: 00:18:58.997 =============== 00:18:58.997 Drivers Enabled 
00:18:58.997 =============== 00:18:58.997 00:18:58.997 common: 00:18:58.997 00:18:58.997 bus: 00:18:58.997 pci, vdev, 00:18:58.997 mempool: 00:18:58.997 ring, 00:18:58.997 dma: 00:18:58.997 00:18:58.997 net: 00:18:58.997 00:18:58.997 crypto: 00:18:58.997 00:18:58.997 compress: 00:18:58.997 00:18:58.997 vdpa: 00:18:58.997 00:18:58.997 00:18:58.997 Message: 00:18:58.997 ================= 00:18:58.997 Content Skipped 00:18:58.997 ================= 00:18:58.997 00:18:58.997 apps: 00:18:58.997 dumpcap: explicitly disabled via build config 00:18:58.997 graph: explicitly disabled via build config 00:18:58.997 pdump: explicitly disabled via build config 00:18:58.997 proc-info: explicitly disabled via build config 00:18:58.997 test-acl: explicitly disabled via build config 00:18:58.997 test-bbdev: explicitly disabled via build config 00:18:58.997 test-cmdline: explicitly disabled via build config 00:18:58.997 test-compress-perf: explicitly disabled via build config 00:18:58.997 test-crypto-perf: explicitly disabled via build config 00:18:58.997 test-dma-perf: explicitly disabled via build config 00:18:58.997 test-eventdev: explicitly disabled via build config 00:18:58.997 test-fib: explicitly disabled via build config 00:18:58.997 test-flow-perf: explicitly disabled via build config 00:18:58.997 test-gpudev: explicitly disabled via build config 00:18:58.997 test-mldev: explicitly disabled via build config 00:18:58.997 test-pipeline: explicitly disabled via build config 00:18:58.997 test-pmd: explicitly disabled via build config 00:18:58.997 test-regex: explicitly disabled via build config 00:18:58.997 test-sad: explicitly disabled via build config 00:18:58.997 test-security-perf: explicitly disabled via build config 00:18:58.997 00:18:58.997 libs: 00:18:58.997 argparse: explicitly disabled via build config 00:18:58.997 metrics: explicitly disabled via build config 00:18:58.997 acl: explicitly disabled via build config 00:18:58.997 bbdev: explicitly disabled via build config 00:18:58.997 bitratestats: explicitly disabled via build config 00:18:58.997 bpf: explicitly disabled via build config 00:18:58.997 cfgfile: explicitly disabled via build config 00:18:58.997 distributor: explicitly disabled via build config 00:18:58.997 efd: explicitly disabled via build config 00:18:58.997 eventdev: explicitly disabled via build config 00:18:58.997 dispatcher: explicitly disabled via build config 00:18:58.997 gpudev: explicitly disabled via build config 00:18:58.997 gro: explicitly disabled via build config 00:18:58.997 gso: explicitly disabled via build config 00:18:58.997 ip_frag: explicitly disabled via build config 00:18:58.997 jobstats: explicitly disabled via build config 00:18:58.997 latencystats: explicitly disabled via build config 00:18:58.997 lpm: explicitly disabled via build config 00:18:58.997 member: explicitly disabled via build config 00:18:58.997 pcapng: explicitly disabled via build config 00:18:58.997 rawdev: explicitly disabled via build config 00:18:58.997 regexdev: explicitly disabled via build config 00:18:58.997 mldev: explicitly disabled via build config 00:18:58.997 rib: explicitly disabled via build config 00:18:58.997 sched: explicitly disabled via build config 00:18:58.997 stack: explicitly disabled via build config 00:18:58.997 ipsec: explicitly disabled via build config 00:18:58.997 pdcp: explicitly disabled via build config 00:18:58.997 fib: explicitly disabled via build config 00:18:58.997 port: explicitly disabled via build config 00:18:58.997 pdump: explicitly disabled via 
build config 00:18:58.997 table: explicitly disabled via build config 00:18:58.997 pipeline: explicitly disabled via build config 00:18:58.997 graph: explicitly disabled via build config 00:18:58.997 node: explicitly disabled via build config 00:18:58.997 00:18:58.997 drivers: 00:18:58.997 common/cpt: not in enabled drivers build config 00:18:58.997 common/dpaax: not in enabled drivers build config 00:18:58.997 common/iavf: not in enabled drivers build config 00:18:58.997 common/idpf: not in enabled drivers build config 00:18:58.997 common/ionic: not in enabled drivers build config 00:18:58.997 common/mvep: not in enabled drivers build config 00:18:58.997 common/octeontx: not in enabled drivers build config 00:18:58.997 bus/auxiliary: not in enabled drivers build config 00:18:58.997 bus/cdx: not in enabled drivers build config 00:18:58.997 bus/dpaa: not in enabled drivers build config 00:18:58.997 bus/fslmc: not in enabled drivers build config 00:18:58.997 bus/ifpga: not in enabled drivers build config 00:18:58.997 bus/platform: not in enabled drivers build config 00:18:58.997 bus/uacce: not in enabled drivers build config 00:18:58.997 bus/vmbus: not in enabled drivers build config 00:18:58.997 common/cnxk: not in enabled drivers build config 00:18:58.997 common/mlx5: not in enabled drivers build config 00:18:58.997 common/nfp: not in enabled drivers build config 00:18:58.997 common/nitrox: not in enabled drivers build config 00:18:58.997 common/qat: not in enabled drivers build config 00:18:58.997 common/sfc_efx: not in enabled drivers build config 00:18:58.997 mempool/bucket: not in enabled drivers build config 00:18:58.997 mempool/cnxk: not in enabled drivers build config 00:18:58.997 mempool/dpaa: not in enabled drivers build config 00:18:58.997 mempool/dpaa2: not in enabled drivers build config 00:18:58.997 mempool/octeontx: not in enabled drivers build config 00:18:58.997 mempool/stack: not in enabled drivers build config 00:18:58.997 dma/cnxk: not in enabled drivers build config 00:18:58.997 dma/dpaa: not in enabled drivers build config 00:18:58.997 dma/dpaa2: not in enabled drivers build config 00:18:58.997 dma/hisilicon: not in enabled drivers build config 00:18:58.997 dma/idxd: not in enabled drivers build config 00:18:58.997 dma/ioat: not in enabled drivers build config 00:18:58.997 dma/skeleton: not in enabled drivers build config 00:18:58.997 net/af_packet: not in enabled drivers build config 00:18:58.997 net/af_xdp: not in enabled drivers build config 00:18:58.997 net/ark: not in enabled drivers build config 00:18:58.997 net/atlantic: not in enabled drivers build config 00:18:58.997 net/avp: not in enabled drivers build config 00:18:58.997 net/axgbe: not in enabled drivers build config 00:18:58.997 net/bnx2x: not in enabled drivers build config 00:18:58.997 net/bnxt: not in enabled drivers build config 00:18:58.997 net/bonding: not in enabled drivers build config 00:18:58.997 net/cnxk: not in enabled drivers build config 00:18:58.997 net/cpfl: not in enabled drivers build config 00:18:58.997 net/cxgbe: not in enabled drivers build config 00:18:58.997 net/dpaa: not in enabled drivers build config 00:18:58.997 net/dpaa2: not in enabled drivers build config 00:18:58.997 net/e1000: not in enabled drivers build config 00:18:58.997 net/ena: not in enabled drivers build config 00:18:58.997 net/enetc: not in enabled drivers build config 00:18:58.997 net/enetfec: not in enabled drivers build config 00:18:58.997 net/enic: not in enabled drivers build config 00:18:58.997 net/failsafe: 
not in enabled drivers build config 00:18:58.997 net/fm10k: not in enabled drivers build config 00:18:58.997 net/gve: not in enabled drivers build config 00:18:58.997 net/hinic: not in enabled drivers build config 00:18:58.997 net/hns3: not in enabled drivers build config 00:18:58.997 net/i40e: not in enabled drivers build config 00:18:58.997 net/iavf: not in enabled drivers build config 00:18:58.997 net/ice: not in enabled drivers build config 00:18:58.997 net/idpf: not in enabled drivers build config 00:18:58.997 net/igc: not in enabled drivers build config 00:18:58.997 net/ionic: not in enabled drivers build config 00:18:58.997 net/ipn3ke: not in enabled drivers build config 00:18:58.997 net/ixgbe: not in enabled drivers build config 00:18:58.997 net/mana: not in enabled drivers build config 00:18:58.997 net/memif: not in enabled drivers build config 00:18:58.997 net/mlx4: not in enabled drivers build config 00:18:58.997 net/mlx5: not in enabled drivers build config 00:18:58.997 net/mvneta: not in enabled drivers build config 00:18:58.997 net/mvpp2: not in enabled drivers build config 00:18:58.997 net/netvsc: not in enabled drivers build config 00:18:58.997 net/nfb: not in enabled drivers build config 00:18:58.997 net/nfp: not in enabled drivers build config 00:18:58.997 net/ngbe: not in enabled drivers build config 00:18:58.997 net/null: not in enabled drivers build config 00:18:58.997 net/octeontx: not in enabled drivers build config 00:18:58.997 net/octeon_ep: not in enabled drivers build config 00:18:58.997 net/pcap: not in enabled drivers build config 00:18:58.997 net/pfe: not in enabled drivers build config 00:18:58.997 net/qede: not in enabled drivers build config 00:18:58.997 net/ring: not in enabled drivers build config 00:18:58.997 net/sfc: not in enabled drivers build config 00:18:58.997 net/softnic: not in enabled drivers build config 00:18:58.997 net/tap: not in enabled drivers build config 00:18:58.997 net/thunderx: not in enabled drivers build config 00:18:58.997 net/txgbe: not in enabled drivers build config 00:18:58.997 net/vdev_netvsc: not in enabled drivers build config 00:18:58.997 net/vhost: not in enabled drivers build config 00:18:58.997 net/virtio: not in enabled drivers build config 00:18:58.997 net/vmxnet3: not in enabled drivers build config 00:18:58.997 raw/*: missing internal dependency, "rawdev" 00:18:58.997 crypto/armv8: not in enabled drivers build config 00:18:58.997 crypto/bcmfs: not in enabled drivers build config 00:18:58.997 crypto/caam_jr: not in enabled drivers build config 00:18:58.997 crypto/ccp: not in enabled drivers build config 00:18:58.997 crypto/cnxk: not in enabled drivers build config 00:18:58.997 crypto/dpaa_sec: not in enabled drivers build config 00:18:58.997 crypto/dpaa2_sec: not in enabled drivers build config 00:18:58.997 crypto/ipsec_mb: not in enabled drivers build config 00:18:58.997 crypto/mlx5: not in enabled drivers build config 00:18:58.997 crypto/mvsam: not in enabled drivers build config 00:18:58.997 crypto/nitrox: not in enabled drivers build config 00:18:58.997 crypto/null: not in enabled drivers build config 00:18:58.997 crypto/octeontx: not in enabled drivers build config 00:18:58.997 crypto/openssl: not in enabled drivers build config 00:18:58.997 crypto/scheduler: not in enabled drivers build config 00:18:58.997 crypto/uadk: not in enabled drivers build config 00:18:58.997 crypto/virtio: not in enabled drivers build config 00:18:58.997 compress/isal: not in enabled drivers build config 00:18:58.997 compress/mlx5: not 
in enabled drivers build config 00:18:58.997 compress/nitrox: not in enabled drivers build config 00:18:58.997 compress/octeontx: not in enabled drivers build config 00:18:58.997 compress/zlib: not in enabled drivers build config 00:18:58.997 regex/*: missing internal dependency, "regexdev" 00:18:58.997 ml/*: missing internal dependency, "mldev" 00:18:58.997 vdpa/ifc: not in enabled drivers build config 00:18:58.997 vdpa/mlx5: not in enabled drivers build config 00:18:58.997 vdpa/nfp: not in enabled drivers build config 00:18:58.997 vdpa/sfc: not in enabled drivers build config 00:18:58.997 event/*: missing internal dependency, "eventdev" 00:18:58.997 baseband/*: missing internal dependency, "bbdev" 00:18:58.997 gpu/*: missing internal dependency, "gpudev" 00:18:58.997 00:18:58.997 00:18:59.255 Build targets in project: 85 00:18:59.255 00:18:59.255 DPDK 24.03.0 00:18:59.255 00:18:59.255 User defined options 00:18:59.255 buildtype : debug 00:18:59.255 default_library : shared 00:18:59.255 libdir : lib 00:18:59.255 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:59.255 b_sanitize : address 00:18:59.255 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:18:59.255 c_link_args : 00:18:59.255 cpu_instruction_set: native 00:18:59.255 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:18:59.255 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:18:59.255 enable_docs : false 00:18:59.255 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:18:59.255 enable_kmods : false 00:18:59.255 max_lcores : 128 00:18:59.255 tests : false 00:18:59.255 00:18:59.255 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:59.819 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:18:59.819 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:18:59.819 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:18:59.819 [3/268] Linking static target lib/librte_kvargs.a 00:19:00.077 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:19:00.077 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:19:00.077 [6/268] Linking static target lib/librte_log.a 00:19:00.335 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:19:00.593 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:19:00.593 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:19:00.593 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:19:00.593 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:19:00.850 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:19:00.850 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:19:00.850 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:19:00.850 [15/268] Linking static target lib/librte_telemetry.a 00:19:00.850 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
00:19:00.850 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:19:01.107 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:19:01.107 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:19:01.107 [20/268] Linking target lib/librte_log.so.24.1 00:19:01.365 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:19:01.365 [22/268] Linking target lib/librte_kvargs.so.24.1 00:19:01.622 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:19:01.622 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:19:01.879 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:19:01.879 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:19:01.879 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:19:01.879 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:19:01.879 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:19:01.879 [30/268] Linking target lib/librte_telemetry.so.24.1 00:19:01.879 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:19:01.879 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:19:02.137 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:19:02.137 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:19:02.137 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:19:02.394 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:19:02.394 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:19:02.652 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:19:02.652 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:19:02.910 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:19:02.910 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:19:02.910 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:19:02.910 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:19:02.910 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:19:03.167 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:19:03.167 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:19:03.424 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:19:03.424 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:19:03.424 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:19:03.681 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:19:03.681 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:19:03.939 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:19:03.939 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:19:04.197 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:19:04.197 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:19:04.197 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:19:04.197 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:19:04.197 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:19:04.455 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:19:04.455 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:19:04.455 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:19:04.714 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:19:04.714 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:19:04.972 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:19:04.972 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:19:05.230 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:19:05.230 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:19:05.230 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:19:05.488 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:19:05.489 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:19:05.489 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:19:05.489 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:19:05.489 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:19:05.796 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:19:05.796 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:19:06.053 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:19:06.312 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:19:06.312 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:19:06.312 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:19:06.312 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:19:06.571 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:19:06.571 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:19:06.571 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:19:06.829 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:19:06.829 [85/268] Linking static target lib/librte_ring.a 00:19:06.829 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:19:06.829 [87/268] Linking static target lib/librte_eal.a 00:19:07.396 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:19:07.396 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:19:07.396 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:19:07.396 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:19:07.396 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:19:07.654 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:19:07.654 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:19:07.654 [95/268] Linking static target lib/librte_rcu.a 00:19:07.654 [96/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:19:07.654 [97/268] Linking static target lib/librte_mempool.a 00:19:07.911 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:19:07.911 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:19:08.169 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:19:08.169 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:19:08.428 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:19:08.428 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:19:08.428 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:19:08.687 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:19:08.687 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:19:08.687 [107/268] Linking static target lib/librte_mbuf.a 00:19:08.946 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:19:08.946 [109/268] Linking static target lib/librte_meter.a 00:19:08.946 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:19:08.946 [111/268] Linking static target lib/librte_net.a 00:19:09.206 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:19:09.464 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:19:09.464 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:19:09.722 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:19:09.722 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:19:09.722 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:19:09.980 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:19:10.238 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:19:10.804 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:19:10.804 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:19:10.804 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:19:11.062 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:19:11.320 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:19:11.320 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:19:11.320 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:19:11.320 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:19:11.320 [128/268] Linking static target lib/librte_pci.a 00:19:11.578 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:19:11.578 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:19:11.578 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:19:11.578 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:19:11.578 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:19:11.836 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:19:11.836 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:19:11.836 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:19:11.836 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:19:11.836 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:11.836 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:19:11.836 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:19:11.836 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:19:11.836 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:19:11.836 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:19:12.094 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:19:12.094 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:19:12.659 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:19:12.659 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:19:12.659 [148/268] Linking static target lib/librte_cmdline.a 00:19:12.659 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:19:12.659 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:19:12.916 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:19:12.916 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:19:12.916 [153/268] Linking static target lib/librte_ethdev.a 00:19:13.173 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:19:13.173 [155/268] Linking static target lib/librte_timer.a 00:19:13.173 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:19:13.173 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:19:13.430 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:19:13.430 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:19:13.688 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:19:13.688 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:19:13.946 [162/268] Linking static target lib/librte_compressdev.a 00:19:13.946 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:19:14.204 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:19:14.204 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:19:14.204 [166/268] Linking static target lib/librte_hash.a 00:19:14.204 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:19:14.204 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:19:14.204 [169/268] Linking static target lib/librte_dmadev.a 00:19:14.204 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:19:14.463 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:19:14.463 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:19:14.736 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:19:14.736 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:15.000 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:19:15.258 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:19:15.258 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:19:15.258 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:19:15.258 [179/268] Linking static target lib/librte_cryptodev.a 00:19:15.258 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:19:15.258 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:19:15.515 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:19:15.515 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:19:15.515 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:19:15.772 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:19:15.772 [186/268] Linking static target lib/librte_power.a 00:19:16.095 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:19:16.376 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:19:16.376 [189/268] Linking static target lib/librte_reorder.a 00:19:16.376 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:19:16.376 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:19:16.376 [192/268] Linking static target lib/librte_security.a 00:19:16.376 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:19:16.631 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:19:16.888 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:19:17.144 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:19:17.144 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:19:17.402 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:19:17.402 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:19:17.660 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:19:17.660 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:19:17.917 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:19:17.917 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:19:17.917 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:17.917 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:19:18.175 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:19:18.433 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:19:18.433 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:19:18.433 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:19:18.433 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:19:18.433 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:19:18.692 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:19:18.692 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:19:18.692 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:18.692 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:18.692 [216/268] 
Linking static target drivers/librte_bus_vdev.a 00:19:18.692 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:18.692 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:18.692 [219/268] Linking static target drivers/librte_bus_pci.a 00:19:18.951 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:19:18.951 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:19:18.951 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:18.951 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:19:19.210 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:19.210 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:19.210 [226/268] Linking static target drivers/librte_mempool_ring.a 00:19:19.210 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:19.471 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:19:19.471 [229/268] Linking target lib/librte_eal.so.24.1 00:19:19.731 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:19:19.732 [231/268] Linking target lib/librte_timer.so.24.1 00:19:19.732 [232/268] Linking target lib/librte_pci.so.24.1 00:19:19.732 [233/268] Linking target lib/librte_meter.so.24.1 00:19:19.732 [234/268] Linking target lib/librte_dmadev.so.24.1 00:19:19.732 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:19:19.732 [236/268] Linking target lib/librte_ring.so.24.1 00:19:19.732 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:19:19.990 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:19:19.990 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:19:19.990 [240/268] Linking target drivers/librte_bus_pci.so.24.1 00:19:19.990 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:19:19.990 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:19:19.990 [243/268] Linking target lib/librte_rcu.so.24.1 00:19:19.990 [244/268] Linking target lib/librte_mempool.so.24.1 00:19:20.248 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:19:20.248 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:19:20.248 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:19:20.248 [248/268] Linking target lib/librte_mbuf.so.24.1 00:19:20.248 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:19:20.506 [250/268] Linking target lib/librte_compressdev.so.24.1 00:19:20.506 [251/268] Linking target lib/librte_reorder.so.24.1 00:19:20.506 [252/268] Linking target lib/librte_net.so.24.1 00:19:20.506 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:19:20.506 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:19:20.506 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:19:20.764 [256/268] Linking target lib/librte_cmdline.so.24.1 00:19:20.764 [257/268] Linking target 
lib/librte_hash.so.24.1 00:19:20.764 [258/268] Linking target lib/librte_security.so.24.1 00:19:20.764 [259/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:19:20.764 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:19:21.699 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:21.957 [262/268] Linking target lib/librte_ethdev.so.24.1 00:19:21.957 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:19:22.215 [264/268] Linking target lib/librte_power.so.24.1 00:19:24.743 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:19:24.743 [266/268] Linking static target lib/librte_vhost.a 00:19:26.643 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:19:26.643 [268/268] Linking target lib/librte_vhost.so.24.1 00:19:26.643 INFO: autodetecting backend as ninja 00:19:26.643 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:19:28.017 CC lib/ut/ut.o 00:19:28.017 CC lib/ut_mock/mock.o 00:19:28.017 CC lib/log/log.o 00:19:28.017 CC lib/log/log_flags.o 00:19:28.017 CC lib/log/log_deprecated.o 00:19:28.017 LIB libspdk_ut_mock.a 00:19:28.017 LIB libspdk_ut.a 00:19:28.017 SO libspdk_ut_mock.so.6.0 00:19:28.017 SO libspdk_ut.so.2.0 00:19:28.017 LIB libspdk_log.a 00:19:28.275 SO libspdk_log.so.7.0 00:19:28.275 SYMLINK libspdk_ut_mock.so 00:19:28.275 SYMLINK libspdk_ut.so 00:19:28.275 SYMLINK libspdk_log.so 00:19:28.534 CC lib/util/base64.o 00:19:28.534 CC lib/util/bit_array.o 00:19:28.534 CC lib/ioat/ioat.o 00:19:28.534 CC lib/util/cpuset.o 00:19:28.534 CC lib/util/crc32.o 00:19:28.534 CC lib/util/crc32c.o 00:19:28.534 CC lib/util/crc16.o 00:19:28.534 CXX lib/trace_parser/trace.o 00:19:28.534 CC lib/dma/dma.o 00:19:28.534 CC lib/vfio_user/host/vfio_user_pci.o 00:19:28.534 CC lib/util/crc32_ieee.o 00:19:28.534 CC lib/vfio_user/host/vfio_user.o 00:19:28.534 CC lib/util/crc64.o 00:19:28.791 LIB libspdk_dma.a 00:19:28.791 CC lib/util/dif.o 00:19:28.791 SO libspdk_dma.so.4.0 00:19:28.791 CC lib/util/fd.o 00:19:28.791 CC lib/util/file.o 00:19:28.791 CC lib/util/hexlify.o 00:19:28.791 SYMLINK libspdk_dma.so 00:19:28.791 CC lib/util/iov.o 00:19:28.791 CC lib/util/math.o 00:19:28.791 LIB libspdk_ioat.a 00:19:28.791 CC lib/util/pipe.o 00:19:28.791 SO libspdk_ioat.so.7.0 00:19:28.791 LIB libspdk_vfio_user.a 00:19:28.791 CC lib/util/strerror_tls.o 00:19:29.049 SYMLINK libspdk_ioat.so 00:19:29.049 CC lib/util/string.o 00:19:29.049 SO libspdk_vfio_user.so.5.0 00:19:29.049 CC lib/util/uuid.o 00:19:29.049 CC lib/util/fd_group.o 00:19:29.049 CC lib/util/xor.o 00:19:29.049 SYMLINK libspdk_vfio_user.so 00:19:29.049 CC lib/util/zipf.o 00:19:29.306 LIB libspdk_util.a 00:19:29.563 SO libspdk_util.so.9.1 00:19:29.820 LIB libspdk_trace_parser.a 00:19:29.820 SYMLINK libspdk_util.so 00:19:29.820 SO libspdk_trace_parser.so.5.0 00:19:29.820 SYMLINK libspdk_trace_parser.so 00:19:29.820 CC lib/vmd/vmd.o 00:19:29.820 CC lib/vmd/led.o 00:19:29.820 CC lib/rdma_utils/rdma_utils.o 00:19:29.820 CC lib/json/json_parse.o 00:19:29.820 CC lib/json/json_util.o 00:19:29.820 CC lib/conf/conf.o 00:19:29.820 CC lib/rdma_provider/common.o 00:19:29.820 CC lib/json/json_write.o 00:19:29.820 CC lib/idxd/idxd.o 00:19:29.820 CC lib/env_dpdk/env.o 00:19:30.078 CC lib/env_dpdk/memory.o 00:19:30.078 LIB libspdk_conf.a 00:19:30.078 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:19:30.336 CC lib/env_dpdk/pci.o 00:19:30.336 SO libspdk_conf.so.6.0 00:19:30.336 LIB libspdk_rdma_utils.a 00:19:30.336 CC lib/env_dpdk/init.o 00:19:30.336 SO libspdk_rdma_utils.so.1.0 00:19:30.336 LIB libspdk_json.a 00:19:30.336 SYMLINK libspdk_conf.so 00:19:30.336 CC lib/env_dpdk/threads.o 00:19:30.336 SO libspdk_json.so.6.0 00:19:30.336 SYMLINK libspdk_rdma_utils.so 00:19:30.336 CC lib/env_dpdk/pci_ioat.o 00:19:30.336 LIB libspdk_rdma_provider.a 00:19:30.336 SYMLINK libspdk_json.so 00:19:30.594 CC lib/env_dpdk/pci_virtio.o 00:19:30.594 SO libspdk_rdma_provider.so.6.0 00:19:30.594 SYMLINK libspdk_rdma_provider.so 00:19:30.594 CC lib/env_dpdk/pci_vmd.o 00:19:30.594 CC lib/env_dpdk/pci_idxd.o 00:19:30.594 CC lib/env_dpdk/pci_event.o 00:19:30.594 CC lib/jsonrpc/jsonrpc_server.o 00:19:30.594 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:19:30.594 CC lib/idxd/idxd_user.o 00:19:30.594 CC lib/idxd/idxd_kernel.o 00:19:30.594 CC lib/jsonrpc/jsonrpc_client.o 00:19:30.851 CC lib/env_dpdk/sigbus_handler.o 00:19:30.851 CC lib/env_dpdk/pci_dpdk.o 00:19:30.851 LIB libspdk_vmd.a 00:19:30.851 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:19:30.851 SO libspdk_vmd.so.6.0 00:19:30.851 CC lib/env_dpdk/pci_dpdk_2207.o 00:19:30.851 CC lib/env_dpdk/pci_dpdk_2211.o 00:19:30.851 SYMLINK libspdk_vmd.so 00:19:31.109 LIB libspdk_idxd.a 00:19:31.109 SO libspdk_idxd.so.12.0 00:19:31.109 LIB libspdk_jsonrpc.a 00:19:31.109 SYMLINK libspdk_idxd.so 00:19:31.109 SO libspdk_jsonrpc.so.6.0 00:19:31.367 SYMLINK libspdk_jsonrpc.so 00:19:31.628 CC lib/rpc/rpc.o 00:19:31.885 LIB libspdk_rpc.a 00:19:31.885 SO libspdk_rpc.so.6.0 00:19:31.885 LIB libspdk_env_dpdk.a 00:19:31.885 SYMLINK libspdk_rpc.so 00:19:32.143 SO libspdk_env_dpdk.so.14.1 00:19:32.143 CC lib/notify/notify.o 00:19:32.143 CC lib/trace/trace.o 00:19:32.143 CC lib/notify/notify_rpc.o 00:19:32.143 CC lib/trace/trace_rpc.o 00:19:32.143 CC lib/trace/trace_flags.o 00:19:32.143 CC lib/keyring/keyring.o 00:19:32.143 CC lib/keyring/keyring_rpc.o 00:19:32.143 SYMLINK libspdk_env_dpdk.so 00:19:32.401 LIB libspdk_notify.a 00:19:32.401 SO libspdk_notify.so.6.0 00:19:32.401 LIB libspdk_trace.a 00:19:32.401 LIB libspdk_keyring.a 00:19:32.658 SO libspdk_trace.so.10.0 00:19:32.658 SO libspdk_keyring.so.1.0 00:19:32.658 SYMLINK libspdk_notify.so 00:19:32.658 SYMLINK libspdk_keyring.so 00:19:32.658 SYMLINK libspdk_trace.so 00:19:32.917 CC lib/thread/iobuf.o 00:19:32.917 CC lib/thread/thread.o 00:19:32.917 CC lib/sock/sock.o 00:19:32.917 CC lib/sock/sock_rpc.o 00:19:33.483 LIB libspdk_sock.a 00:19:33.483 SO libspdk_sock.so.10.0 00:19:33.483 SYMLINK libspdk_sock.so 00:19:33.741 CC lib/nvme/nvme_ctrlr_cmd.o 00:19:33.741 CC lib/nvme/nvme_ctrlr.o 00:19:33.741 CC lib/nvme/nvme_fabric.o 00:19:33.741 CC lib/nvme/nvme_ns_cmd.o 00:19:33.741 CC lib/nvme/nvme_ns.o 00:19:33.741 CC lib/nvme/nvme_pcie_common.o 00:19:33.741 CC lib/nvme/nvme_pcie.o 00:19:33.741 CC lib/nvme/nvme.o 00:19:33.741 CC lib/nvme/nvme_qpair.o 00:19:34.674 CC lib/nvme/nvme_quirks.o 00:19:34.674 CC lib/nvme/nvme_transport.o 00:19:34.931 CC lib/nvme/nvme_discovery.o 00:19:34.931 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:19:34.931 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:19:34.931 CC lib/nvme/nvme_tcp.o 00:19:35.187 CC lib/nvme/nvme_opal.o 00:19:35.187 LIB libspdk_thread.a 00:19:35.187 SO libspdk_thread.so.10.1 00:19:35.187 CC lib/nvme/nvme_io_msg.o 00:19:35.187 SYMLINK libspdk_thread.so 00:19:35.444 CC lib/accel/accel.o 00:19:35.444 CC lib/blob/blobstore.o 00:19:35.444 CC lib/nvme/nvme_poll_group.o 00:19:35.444 
CC lib/blob/request.o 00:19:35.702 CC lib/blob/zeroes.o 00:19:35.702 CC lib/blob/blob_bs_dev.o 00:19:35.702 CC lib/nvme/nvme_zns.o 00:19:35.702 CC lib/nvme/nvme_stubs.o 00:19:35.959 CC lib/nvme/nvme_auth.o 00:19:35.959 CC lib/nvme/nvme_cuse.o 00:19:35.959 CC lib/accel/accel_rpc.o 00:19:36.216 CC lib/nvme/nvme_rdma.o 00:19:36.216 CC lib/accel/accel_sw.o 00:19:36.780 CC lib/init/subsystem.o 00:19:36.780 CC lib/init/json_config.o 00:19:36.780 CC lib/virtio/virtio.o 00:19:36.780 LIB libspdk_accel.a 00:19:36.780 SO libspdk_accel.so.15.1 00:19:36.780 CC lib/init/subsystem_rpc.o 00:19:36.780 CC lib/virtio/virtio_vhost_user.o 00:19:36.780 SYMLINK libspdk_accel.so 00:19:36.780 CC lib/virtio/virtio_vfio_user.o 00:19:37.038 CC lib/virtio/virtio_pci.o 00:19:37.038 CC lib/init/rpc.o 00:19:37.296 LIB libspdk_init.a 00:19:37.296 CC lib/bdev/bdev.o 00:19:37.296 CC lib/bdev/bdev_rpc.o 00:19:37.296 CC lib/bdev/part.o 00:19:37.296 CC lib/bdev/scsi_nvme.o 00:19:37.296 CC lib/bdev/bdev_zone.o 00:19:37.296 SO libspdk_init.so.5.0 00:19:37.296 SYMLINK libspdk_init.so 00:19:37.296 LIB libspdk_virtio.a 00:19:37.296 SO libspdk_virtio.so.7.0 00:19:37.554 SYMLINK libspdk_virtio.so 00:19:37.554 CC lib/event/log_rpc.o 00:19:37.554 CC lib/event/app.o 00:19:37.554 CC lib/event/reactor.o 00:19:37.554 CC lib/event/app_rpc.o 00:19:37.554 CC lib/event/scheduler_static.o 00:19:37.812 LIB libspdk_nvme.a 00:19:38.069 SO libspdk_nvme.so.13.1 00:19:38.069 LIB libspdk_event.a 00:19:38.327 SO libspdk_event.so.14.0 00:19:38.327 SYMLINK libspdk_event.so 00:19:38.327 SYMLINK libspdk_nvme.so 00:19:39.701 LIB libspdk_blob.a 00:19:39.958 SO libspdk_blob.so.11.0 00:19:39.958 SYMLINK libspdk_blob.so 00:19:40.216 CC lib/lvol/lvol.o 00:19:40.216 CC lib/blobfs/blobfs.o 00:19:40.216 CC lib/blobfs/tree.o 00:19:40.781 LIB libspdk_bdev.a 00:19:40.781 SO libspdk_bdev.so.15.1 00:19:41.038 SYMLINK libspdk_bdev.so 00:19:41.296 CC lib/ftl/ftl_core.o 00:19:41.296 CC lib/ftl/ftl_init.o 00:19:41.296 CC lib/ftl/ftl_layout.o 00:19:41.296 CC lib/ftl/ftl_debug.o 00:19:41.296 CC lib/nvmf/ctrlr.o 00:19:41.296 CC lib/ublk/ublk.o 00:19:41.296 CC lib/nbd/nbd.o 00:19:41.296 CC lib/scsi/dev.o 00:19:41.296 LIB libspdk_blobfs.a 00:19:41.296 SO libspdk_blobfs.so.10.0 00:19:41.554 LIB libspdk_lvol.a 00:19:41.554 SO libspdk_lvol.so.10.0 00:19:41.554 SYMLINK libspdk_blobfs.so 00:19:41.554 CC lib/scsi/lun.o 00:19:41.554 CC lib/nbd/nbd_rpc.o 00:19:41.554 SYMLINK libspdk_lvol.so 00:19:41.554 CC lib/ftl/ftl_io.o 00:19:41.554 CC lib/scsi/port.o 00:19:41.554 CC lib/scsi/scsi.o 00:19:41.812 CC lib/nvmf/ctrlr_discovery.o 00:19:41.812 CC lib/scsi/scsi_bdev.o 00:19:41.812 CC lib/ublk/ublk_rpc.o 00:19:41.812 CC lib/scsi/scsi_pr.o 00:19:41.812 CC lib/scsi/scsi_rpc.o 00:19:41.812 LIB libspdk_nbd.a 00:19:41.812 SO libspdk_nbd.so.7.0 00:19:41.812 CC lib/ftl/ftl_sb.o 00:19:41.812 CC lib/scsi/task.o 00:19:42.070 SYMLINK libspdk_nbd.so 00:19:42.070 CC lib/ftl/ftl_l2p.o 00:19:42.070 CC lib/nvmf/ctrlr_bdev.o 00:19:42.070 CC lib/ftl/ftl_l2p_flat.o 00:19:42.070 LIB libspdk_ublk.a 00:19:42.070 CC lib/nvmf/subsystem.o 00:19:42.070 SO libspdk_ublk.so.3.0 00:19:42.328 CC lib/ftl/ftl_nv_cache.o 00:19:42.328 CC lib/nvmf/nvmf.o 00:19:42.328 CC lib/nvmf/nvmf_rpc.o 00:19:42.328 CC lib/ftl/ftl_band.o 00:19:42.328 SYMLINK libspdk_ublk.so 00:19:42.328 CC lib/nvmf/transport.o 00:19:42.328 CC lib/nvmf/tcp.o 00:19:42.585 LIB libspdk_scsi.a 00:19:42.585 SO libspdk_scsi.so.9.0 00:19:42.585 SYMLINK libspdk_scsi.so 00:19:42.585 CC lib/ftl/ftl_band_ops.o 00:19:42.843 CC lib/iscsi/conn.o 00:19:42.843 CC 
lib/iscsi/init_grp.o 00:19:43.100 CC lib/iscsi/iscsi.o 00:19:43.100 CC lib/iscsi/md5.o 00:19:43.357 CC lib/nvmf/stubs.o 00:19:43.357 CC lib/nvmf/mdns_server.o 00:19:43.357 CC lib/iscsi/param.o 00:19:43.357 CC lib/iscsi/portal_grp.o 00:19:43.614 CC lib/ftl/ftl_writer.o 00:19:43.871 CC lib/nvmf/rdma.o 00:19:43.871 CC lib/nvmf/auth.o 00:19:43.871 CC lib/iscsi/tgt_node.o 00:19:43.871 CC lib/iscsi/iscsi_subsystem.o 00:19:43.871 CC lib/iscsi/iscsi_rpc.o 00:19:43.871 CC lib/iscsi/task.o 00:19:43.871 CC lib/vhost/vhost.o 00:19:43.871 CC lib/ftl/ftl_rq.o 00:19:44.129 CC lib/ftl/ftl_reloc.o 00:19:44.129 CC lib/ftl/ftl_l2p_cache.o 00:19:44.386 CC lib/vhost/vhost_rpc.o 00:19:44.386 CC lib/vhost/vhost_scsi.o 00:19:44.386 CC lib/vhost/vhost_blk.o 00:19:44.642 CC lib/vhost/rte_vhost_user.o 00:19:44.642 CC lib/ftl/ftl_p2l.o 00:19:44.642 CC lib/ftl/mngt/ftl_mngt.o 00:19:44.898 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:19:44.898 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:19:44.898 LIB libspdk_iscsi.a 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_startup.o 00:19:45.154 SO libspdk_iscsi.so.8.0 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_md.o 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_misc.o 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_band.o 00:19:45.154 SYMLINK libspdk_iscsi.so 00:19:45.154 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:19:45.411 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:19:45.411 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:19:45.411 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:19:45.411 CC lib/ftl/utils/ftl_conf.o 00:19:45.411 CC lib/ftl/utils/ftl_md.o 00:19:45.411 CC lib/ftl/utils/ftl_mempool.o 00:19:45.668 CC lib/ftl/utils/ftl_bitmap.o 00:19:45.668 CC lib/ftl/utils/ftl_property.o 00:19:45.668 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:19:45.668 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:19:45.668 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:19:45.668 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:19:45.668 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:19:45.990 LIB libspdk_vhost.a 00:19:45.990 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:19:45.990 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:19:45.990 SO libspdk_vhost.so.8.0 00:19:45.990 CC lib/ftl/upgrade/ftl_sb_v3.o 00:19:45.990 CC lib/ftl/upgrade/ftl_sb_v5.o 00:19:45.990 CC lib/ftl/nvc/ftl_nvc_dev.o 00:19:45.990 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:19:45.990 CC lib/ftl/base/ftl_base_dev.o 00:19:45.990 CC lib/ftl/base/ftl_base_bdev.o 00:19:45.991 SYMLINK libspdk_vhost.so 00:19:45.991 CC lib/ftl/ftl_trace.o 00:19:46.555 LIB libspdk_ftl.a 00:19:46.555 SO libspdk_ftl.so.9.0 00:19:46.815 LIB libspdk_nvmf.a 00:19:46.815 SO libspdk_nvmf.so.18.1 00:19:47.074 SYMLINK libspdk_ftl.so 00:19:47.331 SYMLINK libspdk_nvmf.so 00:19:47.589 CC module/env_dpdk/env_dpdk_rpc.o 00:19:47.589 CC module/keyring/linux/keyring.o 00:19:47.589 CC module/scheduler/dynamic/scheduler_dynamic.o 00:19:47.589 CC module/sock/posix/posix.o 00:19:47.589 CC module/blob/bdev/blob_bdev.o 00:19:47.589 CC module/keyring/file/keyring.o 00:19:47.589 CC module/accel/ioat/accel_ioat.o 00:19:47.589 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:19:47.589 CC module/scheduler/gscheduler/gscheduler.o 00:19:47.873 CC module/accel/error/accel_error.o 00:19:47.873 LIB libspdk_env_dpdk_rpc.a 00:19:47.873 SO libspdk_env_dpdk_rpc.so.6.0 00:19:47.873 SYMLINK libspdk_env_dpdk_rpc.so 00:19:47.873 CC module/accel/error/accel_error_rpc.o 00:19:47.873 CC module/keyring/linux/keyring_rpc.o 00:19:47.873 CC module/keyring/file/keyring_rpc.o 00:19:47.873 LIB libspdk_scheduler_dpdk_governor.a 
00:19:47.873 LIB libspdk_scheduler_gscheduler.a 00:19:47.873 SO libspdk_scheduler_dpdk_governor.so.4.0 00:19:47.873 SO libspdk_scheduler_gscheduler.so.4.0 00:19:47.873 LIB libspdk_scheduler_dynamic.a 00:19:47.873 SO libspdk_scheduler_dynamic.so.4.0 00:19:47.873 SYMLINK libspdk_scheduler_dpdk_governor.so 00:19:47.873 CC module/accel/ioat/accel_ioat_rpc.o 00:19:48.131 SYMLINK libspdk_scheduler_gscheduler.so 00:19:48.131 LIB libspdk_accel_error.a 00:19:48.131 LIB libspdk_keyring_linux.a 00:19:48.131 SYMLINK libspdk_scheduler_dynamic.so 00:19:48.131 LIB libspdk_blob_bdev.a 00:19:48.131 LIB libspdk_keyring_file.a 00:19:48.131 SO libspdk_accel_error.so.2.0 00:19:48.131 SO libspdk_keyring_linux.so.1.0 00:19:48.131 SO libspdk_blob_bdev.so.11.0 00:19:48.131 SO libspdk_keyring_file.so.1.0 00:19:48.131 SYMLINK libspdk_blob_bdev.so 00:19:48.131 SYMLINK libspdk_accel_error.so 00:19:48.131 SYMLINK libspdk_keyring_linux.so 00:19:48.131 SYMLINK libspdk_keyring_file.so 00:19:48.131 LIB libspdk_accel_ioat.a 00:19:48.131 CC module/accel/dsa/accel_dsa.o 00:19:48.131 CC module/accel/dsa/accel_dsa_rpc.o 00:19:48.131 CC module/accel/iaa/accel_iaa.o 00:19:48.131 CC module/accel/iaa/accel_iaa_rpc.o 00:19:48.131 SO libspdk_accel_ioat.so.6.0 00:19:48.389 SYMLINK libspdk_accel_ioat.so 00:19:48.389 CC module/bdev/gpt/gpt.o 00:19:48.389 CC module/bdev/delay/vbdev_delay.o 00:19:48.389 CC module/bdev/error/vbdev_error.o 00:19:48.389 LIB libspdk_accel_iaa.a 00:19:48.389 CC module/blobfs/bdev/blobfs_bdev.o 00:19:48.389 LIB libspdk_accel_dsa.a 00:19:48.389 SO libspdk_accel_iaa.so.3.0 00:19:48.389 CC module/bdev/lvol/vbdev_lvol.o 00:19:48.647 CC module/bdev/malloc/bdev_malloc.o 00:19:48.647 SO libspdk_accel_dsa.so.5.0 00:19:48.647 CC module/bdev/null/bdev_null.o 00:19:48.647 SYMLINK libspdk_accel_iaa.so 00:19:48.647 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:19:48.647 SYMLINK libspdk_accel_dsa.so 00:19:48.647 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:19:48.647 CC module/bdev/gpt/vbdev_gpt.o 00:19:48.647 LIB libspdk_sock_posix.a 00:19:48.647 CC module/bdev/null/bdev_null_rpc.o 00:19:48.647 SO libspdk_sock_posix.so.6.0 00:19:48.906 CC module/bdev/error/vbdev_error_rpc.o 00:19:48.906 LIB libspdk_blobfs_bdev.a 00:19:48.906 SYMLINK libspdk_sock_posix.so 00:19:48.906 SO libspdk_blobfs_bdev.so.6.0 00:19:48.906 CC module/bdev/delay/vbdev_delay_rpc.o 00:19:48.906 SYMLINK libspdk_blobfs_bdev.so 00:19:48.906 LIB libspdk_bdev_null.a 00:19:48.906 LIB libspdk_bdev_error.a 00:19:48.906 SO libspdk_bdev_null.so.6.0 00:19:48.906 SO libspdk_bdev_error.so.6.0 00:19:48.906 LIB libspdk_bdev_gpt.a 00:19:48.906 CC module/bdev/malloc/bdev_malloc_rpc.o 00:19:48.906 CC module/bdev/nvme/bdev_nvme.o 00:19:49.164 SO libspdk_bdev_gpt.so.6.0 00:19:49.164 SYMLINK libspdk_bdev_null.so 00:19:49.164 SYMLINK libspdk_bdev_error.so 00:19:49.164 CC module/bdev/nvme/bdev_nvme_rpc.o 00:19:49.164 LIB libspdk_bdev_delay.a 00:19:49.164 SYMLINK libspdk_bdev_gpt.so 00:19:49.164 LIB libspdk_bdev_lvol.a 00:19:49.164 CC module/bdev/passthru/vbdev_passthru.o 00:19:49.164 CC module/bdev/raid/bdev_raid.o 00:19:49.164 SO libspdk_bdev_delay.so.6.0 00:19:49.164 SO libspdk_bdev_lvol.so.6.0 00:19:49.164 LIB libspdk_bdev_malloc.a 00:19:49.164 SYMLINK libspdk_bdev_delay.so 00:19:49.164 SO libspdk_bdev_malloc.so.6.0 00:19:49.164 CC module/bdev/raid/bdev_raid_rpc.o 00:19:49.164 SYMLINK libspdk_bdev_lvol.so 00:19:49.164 CC module/bdev/raid/bdev_raid_sb.o 00:19:49.422 CC module/bdev/zone_block/vbdev_zone_block.o 00:19:49.422 CC module/bdev/split/vbdev_split.o 00:19:49.422 CC 
module/bdev/xnvme/bdev_xnvme.o 00:19:49.422 SYMLINK libspdk_bdev_malloc.so 00:19:49.422 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:19:49.681 CC module/bdev/aio/bdev_aio.o 00:19:49.681 CC module/bdev/split/vbdev_split_rpc.o 00:19:49.681 CC module/bdev/aio/bdev_aio_rpc.o 00:19:49.681 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:19:49.681 LIB libspdk_bdev_passthru.a 00:19:49.681 CC module/bdev/ftl/bdev_ftl.o 00:19:49.681 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:19:49.681 SO libspdk_bdev_passthru.so.6.0 00:19:49.681 LIB libspdk_bdev_split.a 00:19:49.681 CC module/bdev/ftl/bdev_ftl_rpc.o 00:19:49.939 SO libspdk_bdev_split.so.6.0 00:19:49.939 SYMLINK libspdk_bdev_passthru.so 00:19:49.939 LIB libspdk_bdev_xnvme.a 00:19:49.939 LIB libspdk_bdev_zone_block.a 00:19:49.939 SYMLINK libspdk_bdev_split.so 00:19:49.939 SO libspdk_bdev_xnvme.so.3.0 00:19:49.939 CC module/bdev/raid/raid0.o 00:19:49.939 SO libspdk_bdev_zone_block.so.6.0 00:19:49.939 SYMLINK libspdk_bdev_xnvme.so 00:19:49.939 CC module/bdev/nvme/nvme_rpc.o 00:19:49.939 LIB libspdk_bdev_aio.a 00:19:49.939 CC module/bdev/raid/raid1.o 00:19:50.197 LIB libspdk_bdev_ftl.a 00:19:50.197 CC module/bdev/iscsi/bdev_iscsi.o 00:19:50.197 SO libspdk_bdev_aio.so.6.0 00:19:50.197 SO libspdk_bdev_ftl.so.6.0 00:19:50.197 SYMLINK libspdk_bdev_zone_block.so 00:19:50.197 CC module/bdev/nvme/bdev_mdns_client.o 00:19:50.197 SYMLINK libspdk_bdev_aio.so 00:19:50.197 CC module/bdev/raid/concat.o 00:19:50.197 CC module/bdev/virtio/bdev_virtio_scsi.o 00:19:50.197 SYMLINK libspdk_bdev_ftl.so 00:19:50.197 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:19:50.197 CC module/bdev/nvme/vbdev_opal.o 00:19:50.456 CC module/bdev/nvme/vbdev_opal_rpc.o 00:19:50.456 CC module/bdev/virtio/bdev_virtio_blk.o 00:19:50.456 CC module/bdev/virtio/bdev_virtio_rpc.o 00:19:50.456 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:19:50.456 LIB libspdk_bdev_raid.a 00:19:50.456 LIB libspdk_bdev_iscsi.a 00:19:50.714 SO libspdk_bdev_raid.so.6.0 00:19:50.714 SO libspdk_bdev_iscsi.so.6.0 00:19:50.714 SYMLINK libspdk_bdev_iscsi.so 00:19:50.714 SYMLINK libspdk_bdev_raid.so 00:19:50.714 LIB libspdk_bdev_virtio.a 00:19:50.971 SO libspdk_bdev_virtio.so.6.0 00:19:50.971 SYMLINK libspdk_bdev_virtio.so 00:19:51.904 LIB libspdk_bdev_nvme.a 00:19:52.162 SO libspdk_bdev_nvme.so.7.0 00:19:52.162 SYMLINK libspdk_bdev_nvme.so 00:19:52.728 CC module/event/subsystems/scheduler/scheduler.o 00:19:52.728 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:19:52.728 CC module/event/subsystems/iobuf/iobuf.o 00:19:52.728 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:19:52.728 CC module/event/subsystems/sock/sock.o 00:19:52.728 CC module/event/subsystems/vmd/vmd.o 00:19:52.728 CC module/event/subsystems/keyring/keyring.o 00:19:52.728 CC module/event/subsystems/vmd/vmd_rpc.o 00:19:52.986 LIB libspdk_event_keyring.a 00:19:52.986 LIB libspdk_event_vhost_blk.a 00:19:52.986 LIB libspdk_event_scheduler.a 00:19:52.986 LIB libspdk_event_sock.a 00:19:52.986 LIB libspdk_event_iobuf.a 00:19:52.986 SO libspdk_event_keyring.so.1.0 00:19:52.986 SO libspdk_event_scheduler.so.4.0 00:19:52.986 SO libspdk_event_sock.so.5.0 00:19:52.986 SO libspdk_event_vhost_blk.so.3.0 00:19:52.986 SO libspdk_event_iobuf.so.3.0 00:19:52.986 LIB libspdk_event_vmd.a 00:19:52.986 SYMLINK libspdk_event_sock.so 00:19:52.986 SYMLINK libspdk_event_scheduler.so 00:19:52.986 SYMLINK libspdk_event_vhost_blk.so 00:19:52.986 SYMLINK libspdk_event_keyring.so 00:19:52.986 SO libspdk_event_vmd.so.6.0 00:19:52.986 SYMLINK libspdk_event_iobuf.so 00:19:52.986 SYMLINK 
libspdk_event_vmd.so 00:19:53.244 CC module/event/subsystems/accel/accel.o 00:19:53.503 LIB libspdk_event_accel.a 00:19:53.503 SO libspdk_event_accel.so.6.0 00:19:53.503 SYMLINK libspdk_event_accel.so 00:19:54.070 CC module/event/subsystems/bdev/bdev.o 00:19:54.070 LIB libspdk_event_bdev.a 00:19:54.070 SO libspdk_event_bdev.so.6.0 00:19:54.327 SYMLINK libspdk_event_bdev.so 00:19:54.327 CC module/event/subsystems/ublk/ublk.o 00:19:54.327 CC module/event/subsystems/scsi/scsi.o 00:19:54.327 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:19:54.327 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:19:54.327 CC module/event/subsystems/nbd/nbd.o 00:19:54.585 LIB libspdk_event_ublk.a 00:19:54.585 LIB libspdk_event_scsi.a 00:19:54.585 LIB libspdk_event_nbd.a 00:19:54.585 SO libspdk_event_ublk.so.3.0 00:19:54.585 SO libspdk_event_scsi.so.6.0 00:19:54.585 SO libspdk_event_nbd.so.6.0 00:19:54.585 SYMLINK libspdk_event_ublk.so 00:19:54.585 SYMLINK libspdk_event_scsi.so 00:19:54.585 LIB libspdk_event_nvmf.a 00:19:54.843 SYMLINK libspdk_event_nbd.so 00:19:54.843 SO libspdk_event_nvmf.so.6.0 00:19:54.843 SYMLINK libspdk_event_nvmf.so 00:19:54.843 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:19:54.843 CC module/event/subsystems/iscsi/iscsi.o 00:19:55.100 LIB libspdk_event_vhost_scsi.a 00:19:55.100 LIB libspdk_event_iscsi.a 00:19:55.100 SO libspdk_event_iscsi.so.6.0 00:19:55.100 SO libspdk_event_vhost_scsi.so.3.0 00:19:55.357 SYMLINK libspdk_event_iscsi.so 00:19:55.357 SYMLINK libspdk_event_vhost_scsi.so 00:19:55.357 SO libspdk.so.6.0 00:19:55.357 SYMLINK libspdk.so 00:19:55.614 TEST_HEADER include/spdk/accel.h 00:19:55.614 TEST_HEADER include/spdk/accel_module.h 00:19:55.614 TEST_HEADER include/spdk/assert.h 00:19:55.614 CC test/rpc_client/rpc_client_test.o 00:19:55.614 TEST_HEADER include/spdk/barrier.h 00:19:55.614 CXX app/trace/trace.o 00:19:55.614 TEST_HEADER include/spdk/base64.h 00:19:55.614 TEST_HEADER include/spdk/bdev.h 00:19:55.614 TEST_HEADER include/spdk/bdev_module.h 00:19:55.614 TEST_HEADER include/spdk/bdev_zone.h 00:19:55.614 CC app/trace_record/trace_record.o 00:19:55.614 TEST_HEADER include/spdk/bit_array.h 00:19:55.614 TEST_HEADER include/spdk/bit_pool.h 00:19:55.614 TEST_HEADER include/spdk/blob_bdev.h 00:19:55.614 TEST_HEADER include/spdk/blobfs_bdev.h 00:19:55.614 TEST_HEADER include/spdk/blobfs.h 00:19:55.614 TEST_HEADER include/spdk/blob.h 00:19:55.614 TEST_HEADER include/spdk/conf.h 00:19:55.614 TEST_HEADER include/spdk/config.h 00:19:55.614 TEST_HEADER include/spdk/cpuset.h 00:19:55.614 TEST_HEADER include/spdk/crc16.h 00:19:55.614 TEST_HEADER include/spdk/crc32.h 00:19:55.614 TEST_HEADER include/spdk/crc64.h 00:19:55.614 TEST_HEADER include/spdk/dif.h 00:19:55.614 TEST_HEADER include/spdk/dma.h 00:19:55.614 TEST_HEADER include/spdk/endian.h 00:19:55.614 TEST_HEADER include/spdk/env_dpdk.h 00:19:55.614 TEST_HEADER include/spdk/env.h 00:19:55.614 TEST_HEADER include/spdk/event.h 00:19:55.614 TEST_HEADER include/spdk/fd_group.h 00:19:55.614 TEST_HEADER include/spdk/fd.h 00:19:55.614 TEST_HEADER include/spdk/file.h 00:19:55.614 TEST_HEADER include/spdk/ftl.h 00:19:55.873 TEST_HEADER include/spdk/gpt_spec.h 00:19:55.873 TEST_HEADER include/spdk/hexlify.h 00:19:55.873 TEST_HEADER include/spdk/histogram_data.h 00:19:55.873 TEST_HEADER include/spdk/idxd.h 00:19:55.873 TEST_HEADER include/spdk/idxd_spec.h 00:19:55.873 TEST_HEADER include/spdk/init.h 00:19:55.873 CC app/nvmf_tgt/nvmf_main.o 00:19:55.873 TEST_HEADER include/spdk/ioat.h 00:19:55.873 TEST_HEADER include/spdk/ioat_spec.h 
00:19:55.873 TEST_HEADER include/spdk/iscsi_spec.h 00:19:55.873 TEST_HEADER include/spdk/json.h 00:19:55.873 TEST_HEADER include/spdk/jsonrpc.h 00:19:55.873 TEST_HEADER include/spdk/keyring.h 00:19:55.873 TEST_HEADER include/spdk/keyring_module.h 00:19:55.873 CC test/thread/poller_perf/poller_perf.o 00:19:55.873 TEST_HEADER include/spdk/likely.h 00:19:55.873 TEST_HEADER include/spdk/log.h 00:19:55.873 TEST_HEADER include/spdk/lvol.h 00:19:55.873 TEST_HEADER include/spdk/memory.h 00:19:55.873 TEST_HEADER include/spdk/mmio.h 00:19:55.873 TEST_HEADER include/spdk/nbd.h 00:19:55.873 TEST_HEADER include/spdk/notify.h 00:19:55.873 TEST_HEADER include/spdk/nvme.h 00:19:55.873 TEST_HEADER include/spdk/nvme_intel.h 00:19:55.873 TEST_HEADER include/spdk/nvme_ocssd.h 00:19:55.873 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:19:55.873 CC examples/util/zipf/zipf.o 00:19:55.873 TEST_HEADER include/spdk/nvme_spec.h 00:19:55.873 TEST_HEADER include/spdk/nvme_zns.h 00:19:55.873 TEST_HEADER include/spdk/nvmf_cmd.h 00:19:55.873 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:19:55.873 TEST_HEADER include/spdk/nvmf.h 00:19:55.873 TEST_HEADER include/spdk/nvmf_spec.h 00:19:55.873 TEST_HEADER include/spdk/nvmf_transport.h 00:19:55.873 CC test/dma/test_dma/test_dma.o 00:19:55.873 TEST_HEADER include/spdk/opal.h 00:19:55.873 TEST_HEADER include/spdk/opal_spec.h 00:19:55.873 TEST_HEADER include/spdk/pci_ids.h 00:19:55.873 TEST_HEADER include/spdk/pipe.h 00:19:55.873 CC test/app/bdev_svc/bdev_svc.o 00:19:55.873 TEST_HEADER include/spdk/queue.h 00:19:55.873 TEST_HEADER include/spdk/reduce.h 00:19:55.873 TEST_HEADER include/spdk/rpc.h 00:19:55.873 TEST_HEADER include/spdk/scheduler.h 00:19:55.873 TEST_HEADER include/spdk/scsi.h 00:19:55.873 TEST_HEADER include/spdk/scsi_spec.h 00:19:55.873 TEST_HEADER include/spdk/sock.h 00:19:55.873 TEST_HEADER include/spdk/stdinc.h 00:19:55.873 TEST_HEADER include/spdk/string.h 00:19:55.873 TEST_HEADER include/spdk/thread.h 00:19:55.873 TEST_HEADER include/spdk/trace.h 00:19:55.873 TEST_HEADER include/spdk/trace_parser.h 00:19:55.873 TEST_HEADER include/spdk/tree.h 00:19:55.873 TEST_HEADER include/spdk/ublk.h 00:19:55.873 TEST_HEADER include/spdk/util.h 00:19:55.873 TEST_HEADER include/spdk/uuid.h 00:19:55.873 TEST_HEADER include/spdk/version.h 00:19:55.873 TEST_HEADER include/spdk/vfio_user_pci.h 00:19:55.873 TEST_HEADER include/spdk/vfio_user_spec.h 00:19:55.873 TEST_HEADER include/spdk/vhost.h 00:19:55.873 TEST_HEADER include/spdk/vmd.h 00:19:55.873 TEST_HEADER include/spdk/xor.h 00:19:55.873 TEST_HEADER include/spdk/zipf.h 00:19:55.873 CXX test/cpp_headers/accel.o 00:19:55.873 CC test/env/mem_callbacks/mem_callbacks.o 00:19:55.873 LINK rpc_client_test 00:19:55.873 LINK poller_perf 00:19:56.132 LINK nvmf_tgt 00:19:56.132 LINK zipf 00:19:56.132 LINK spdk_trace_record 00:19:56.132 LINK bdev_svc 00:19:56.132 CXX test/cpp_headers/accel_module.o 00:19:56.132 CXX test/cpp_headers/assert.o 00:19:56.132 LINK spdk_trace 00:19:56.390 CC test/env/vtophys/vtophys.o 00:19:56.390 LINK test_dma 00:19:56.390 CXX test/cpp_headers/barrier.o 00:19:56.390 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:56.390 CC examples/ioat/perf/perf.o 00:19:56.649 LINK vtophys 00:19:56.649 CC app/iscsi_tgt/iscsi_tgt.o 00:19:56.649 CC examples/vmd/lsvmd/lsvmd.o 00:19:56.649 CXX test/cpp_headers/base64.o 00:19:56.649 LINK env_dpdk_post_init 00:19:56.649 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:19:56.649 LINK mem_callbacks 00:19:56.649 CC examples/ioat/verify/verify.o 00:19:56.649 CXX test/cpp_headers/bdev.o 
00:19:56.649 LINK lsvmd 00:19:56.908 LINK iscsi_tgt 00:19:56.908 LINK ioat_perf 00:19:56.908 CXX test/cpp_headers/bdev_module.o 00:19:56.908 CXX test/cpp_headers/bdev_zone.o 00:19:56.908 CC test/env/memory/memory_ut.o 00:19:56.908 CC test/env/pci/pci_ut.o 00:19:56.908 LINK verify 00:19:57.166 CXX test/cpp_headers/bit_array.o 00:19:57.166 CC examples/vmd/led/led.o 00:19:57.166 CC test/event/event_perf/event_perf.o 00:19:57.166 LINK nvme_fuzz 00:19:57.166 CC test/event/reactor/reactor.o 00:19:57.166 CC test/event/reactor_perf/reactor_perf.o 00:19:57.166 CXX test/cpp_headers/bit_pool.o 00:19:57.453 LINK led 00:19:57.453 CC test/event/app_repeat/app_repeat.o 00:19:57.453 CC app/spdk_tgt/spdk_tgt.o 00:19:57.453 LINK reactor 00:19:57.454 LINK reactor_perf 00:19:57.454 LINK event_perf 00:19:57.454 CXX test/cpp_headers/blob_bdev.o 00:19:57.454 LINK pci_ut 00:19:57.454 LINK app_repeat 00:19:57.711 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:19:57.711 CXX test/cpp_headers/blobfs_bdev.o 00:19:57.711 LINK spdk_tgt 00:19:57.970 CC examples/interrupt_tgt/interrupt_tgt.o 00:19:57.970 CC examples/idxd/perf/perf.o 00:19:57.970 CC test/nvme/aer/aer.o 00:19:57.970 CXX test/cpp_headers/blobfs.o 00:19:57.970 CXX test/cpp_headers/blob.o 00:19:57.970 CC test/event/scheduler/scheduler.o 00:19:57.970 CC test/accel/dif/dif.o 00:19:57.970 CC app/spdk_lspci/spdk_lspci.o 00:19:58.228 LINK interrupt_tgt 00:19:58.228 CXX test/cpp_headers/conf.o 00:19:58.228 LINK scheduler 00:19:58.228 LINK spdk_lspci 00:19:58.228 LINK aer 00:19:58.228 CC examples/thread/thread/thread_ex.o 00:19:58.228 CXX test/cpp_headers/config.o 00:19:58.228 LINK memory_ut 00:19:58.488 CXX test/cpp_headers/cpuset.o 00:19:58.488 CC test/nvme/reset/reset.o 00:19:58.488 LINK idxd_perf 00:19:58.488 LINK dif 00:19:58.488 CC app/spdk_nvme_perf/perf.o 00:19:58.488 CXX test/cpp_headers/crc16.o 00:19:58.488 CC test/nvme/sgl/sgl.o 00:19:58.746 LINK thread 00:19:58.746 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:58.746 CC app/spdk_nvme_identify/identify.o 00:19:58.746 CC examples/sock/hello_world/hello_sock.o 00:19:58.746 LINK reset 00:19:58.746 CXX test/cpp_headers/crc32.o 00:19:58.746 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:59.004 CXX test/cpp_headers/crc64.o 00:19:59.004 LINK sgl 00:19:59.004 CC app/spdk_nvme_discover/discovery_aer.o 00:19:59.004 LINK hello_sock 00:19:59.004 CC test/nvme/e2edp/nvme_dp.o 00:19:59.004 CC test/nvme/overhead/overhead.o 00:19:59.004 CXX test/cpp_headers/dif.o 00:19:59.004 CXX test/cpp_headers/dma.o 00:19:59.263 LINK spdk_nvme_discover 00:19:59.263 CXX test/cpp_headers/endian.o 00:19:59.263 LINK vhost_fuzz 00:19:59.521 LINK nvme_dp 00:19:59.521 LINK overhead 00:19:59.521 CC examples/accel/perf/accel_perf.o 00:19:59.521 CXX test/cpp_headers/env_dpdk.o 00:19:59.521 CC examples/blob/hello_world/hello_blob.o 00:19:59.521 CXX test/cpp_headers/env.o 00:19:59.521 CC examples/blob/cli/blobcli.o 00:19:59.779 CXX test/cpp_headers/event.o 00:19:59.779 LINK spdk_nvme_perf 00:19:59.779 CXX test/cpp_headers/fd_group.o 00:19:59.779 CC test/nvme/err_injection/err_injection.o 00:19:59.779 LINK hello_blob 00:20:00.038 CC examples/nvme/hello_world/hello_world.o 00:20:00.038 LINK spdk_nvme_identify 00:20:00.038 CXX test/cpp_headers/fd.o 00:20:00.038 CC examples/nvme/reconnect/reconnect.o 00:20:00.038 LINK iscsi_fuzz 00:20:00.038 LINK err_injection 00:20:00.038 CC examples/nvme/nvme_manage/nvme_manage.o 00:20:00.038 LINK accel_perf 00:20:00.346 CXX test/cpp_headers/file.o 00:20:00.346 CC examples/nvme/arbitration/arbitration.o 
00:20:00.346 LINK hello_world 00:20:00.346 LINK blobcli 00:20:00.346 CC app/spdk_top/spdk_top.o 00:20:00.346 CXX test/cpp_headers/ftl.o 00:20:00.346 CC examples/nvme/hotplug/hotplug.o 00:20:00.346 LINK reconnect 00:20:00.346 CC test/nvme/startup/startup.o 00:20:00.346 CC test/app/histogram_perf/histogram_perf.o 00:20:00.604 CC test/app/jsoncat/jsoncat.o 00:20:00.604 CXX test/cpp_headers/gpt_spec.o 00:20:00.604 LINK startup 00:20:00.604 CC examples/nvme/cmb_copy/cmb_copy.o 00:20:00.604 LINK arbitration 00:20:00.604 LINK histogram_perf 00:20:00.604 LINK jsoncat 00:20:00.604 LINK hotplug 00:20:00.927 CC examples/nvme/abort/abort.o 00:20:00.927 CXX test/cpp_headers/hexlify.o 00:20:00.927 LINK nvme_manage 00:20:00.927 LINK cmb_copy 00:20:00.927 CXX test/cpp_headers/histogram_data.o 00:20:00.927 CC test/nvme/reserve/reserve.o 00:20:00.927 CC test/app/stub/stub.o 00:20:00.927 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:20:01.186 CC test/nvme/simple_copy/simple_copy.o 00:20:01.186 CXX test/cpp_headers/idxd.o 00:20:01.186 CC test/nvme/connect_stress/connect_stress.o 00:20:01.186 LINK stub 00:20:01.186 LINK pmr_persistence 00:20:01.186 CC test/blobfs/mkfs/mkfs.o 00:20:01.186 LINK reserve 00:20:01.186 LINK abort 00:20:01.186 CC app/vhost/vhost.o 00:20:01.445 CXX test/cpp_headers/idxd_spec.o 00:20:01.445 LINK connect_stress 00:20:01.445 LINK simple_copy 00:20:01.445 LINK mkfs 00:20:01.445 LINK spdk_top 00:20:01.445 LINK vhost 00:20:01.445 CXX test/cpp_headers/init.o 00:20:01.445 CC test/nvme/boot_partition/boot_partition.o 00:20:01.704 CC app/spdk_dd/spdk_dd.o 00:20:01.704 CC examples/bdev/hello_world/hello_bdev.o 00:20:01.704 CXX test/cpp_headers/ioat.o 00:20:01.704 CC test/nvme/compliance/nvme_compliance.o 00:20:01.704 CC test/lvol/esnap/esnap.o 00:20:01.704 CXX test/cpp_headers/ioat_spec.o 00:20:01.704 LINK boot_partition 00:20:01.961 CC app/fio/nvme/fio_plugin.o 00:20:01.962 CC examples/bdev/bdevperf/bdevperf.o 00:20:01.962 CC test/bdev/bdevio/bdevio.o 00:20:01.962 CXX test/cpp_headers/iscsi_spec.o 00:20:01.962 LINK hello_bdev 00:20:01.962 CC app/fio/bdev/fio_plugin.o 00:20:01.962 CC test/nvme/fused_ordering/fused_ordering.o 00:20:02.219 LINK spdk_dd 00:20:02.219 CXX test/cpp_headers/json.o 00:20:02.219 LINK nvme_compliance 00:20:02.219 CXX test/cpp_headers/jsonrpc.o 00:20:02.219 LINK fused_ordering 00:20:02.477 CC test/nvme/doorbell_aers/doorbell_aers.o 00:20:02.477 LINK bdevio 00:20:02.478 CC test/nvme/fdp/fdp.o 00:20:02.478 CC test/nvme/cuse/cuse.o 00:20:02.478 CXX test/cpp_headers/keyring.o 00:20:02.478 CXX test/cpp_headers/keyring_module.o 00:20:02.736 CXX test/cpp_headers/likely.o 00:20:02.736 LINK doorbell_aers 00:20:02.736 LINK spdk_nvme 00:20:02.736 LINK spdk_bdev 00:20:02.736 CXX test/cpp_headers/log.o 00:20:02.736 CXX test/cpp_headers/lvol.o 00:20:02.736 CXX test/cpp_headers/memory.o 00:20:02.736 CXX test/cpp_headers/mmio.o 00:20:02.736 CXX test/cpp_headers/nbd.o 00:20:02.736 CXX test/cpp_headers/notify.o 00:20:02.736 CXX test/cpp_headers/nvme.o 00:20:02.736 LINK bdevperf 00:20:02.736 CXX test/cpp_headers/nvme_intel.o 00:20:02.995 LINK fdp 00:20:02.995 CXX test/cpp_headers/nvme_ocssd.o 00:20:02.995 CXX test/cpp_headers/nvme_ocssd_spec.o 00:20:02.995 CXX test/cpp_headers/nvme_spec.o 00:20:02.995 CXX test/cpp_headers/nvme_zns.o 00:20:02.995 CXX test/cpp_headers/nvmf_cmd.o 00:20:02.995 CXX test/cpp_headers/nvmf_fc_spec.o 00:20:02.995 CXX test/cpp_headers/nvmf.o 00:20:02.995 CXX test/cpp_headers/nvmf_spec.o 00:20:03.253 CXX test/cpp_headers/nvmf_transport.o 00:20:03.253 CXX 
test/cpp_headers/opal.o 00:20:03.253 CXX test/cpp_headers/opal_spec.o 00:20:03.253 CXX test/cpp_headers/pci_ids.o 00:20:03.253 CXX test/cpp_headers/pipe.o 00:20:03.253 CXX test/cpp_headers/queue.o 00:20:03.253 CXX test/cpp_headers/reduce.o 00:20:03.253 CXX test/cpp_headers/rpc.o 00:20:03.253 CC examples/nvmf/nvmf/nvmf.o 00:20:03.512 CXX test/cpp_headers/scheduler.o 00:20:03.512 CXX test/cpp_headers/scsi.o 00:20:03.512 CXX test/cpp_headers/scsi_spec.o 00:20:03.512 CXX test/cpp_headers/sock.o 00:20:03.512 CXX test/cpp_headers/stdinc.o 00:20:03.512 CXX test/cpp_headers/string.o 00:20:03.512 CXX test/cpp_headers/thread.o 00:20:03.512 CXX test/cpp_headers/trace.o 00:20:03.771 CXX test/cpp_headers/trace_parser.o 00:20:03.771 CXX test/cpp_headers/tree.o 00:20:03.771 CXX test/cpp_headers/ublk.o 00:20:03.771 CXX test/cpp_headers/util.o 00:20:03.771 CXX test/cpp_headers/uuid.o 00:20:03.771 CXX test/cpp_headers/version.o 00:20:03.771 CXX test/cpp_headers/vfio_user_pci.o 00:20:03.771 LINK nvmf 00:20:03.771 CXX test/cpp_headers/vfio_user_spec.o 00:20:03.771 CXX test/cpp_headers/vhost.o 00:20:03.771 CXX test/cpp_headers/vmd.o 00:20:03.771 CXX test/cpp_headers/xor.o 00:20:03.771 CXX test/cpp_headers/zipf.o 00:20:04.338 LINK cuse 00:20:08.606 LINK esnap 00:20:09.172 00:20:09.172 real 1m21.738s 00:20:09.172 user 7m44.952s 00:20:09.172 sys 1m54.029s 00:20:09.172 07:28:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:20:09.172 07:28:47 make -- common/autotest_common.sh@10 -- $ set +x 00:20:09.172 ************************************ 00:20:09.172 END TEST make 00:20:09.172 ************************************ 00:20:09.172 07:28:47 -- common/autotest_common.sh@1142 -- $ return 0 00:20:09.172 07:28:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:20:09.172 07:28:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:09.172 07:28:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:09.172 07:28:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:09.172 07:28:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:09.172 07:28:47 -- pm/common@44 -- $ pid=5241 00:20:09.172 07:28:47 -- pm/common@50 -- $ kill -TERM 5241 00:20:09.172 07:28:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:09.172 07:28:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:09.172 07:28:47 -- pm/common@44 -- $ pid=5243 00:20:09.172 07:28:47 -- pm/common@50 -- $ kill -TERM 5243 00:20:09.430 07:28:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.430 07:28:47 -- nvmf/common.sh@7 -- # uname -s 00:20:09.430 07:28:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.430 07:28:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.430 07:28:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.430 07:28:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.430 07:28:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.430 07:28:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.430 07:28:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.430 07:28:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.430 07:28:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.430 07:28:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.430 07:28:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f109166-b1ec-48bc-8a74-71b6d6599bfb 
00:20:09.430 07:28:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=2f109166-b1ec-48bc-8a74-71b6d6599bfb 00:20:09.430 07:28:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.430 07:28:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.430 07:28:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:09.430 07:28:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.430 07:28:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.430 07:28:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.430 07:28:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.430 07:28:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.430 07:28:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.430 07:28:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.430 07:28:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.430 07:28:47 -- paths/export.sh@5 -- # export PATH 00:20:09.430 07:28:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.430 07:28:47 -- nvmf/common.sh@47 -- # : 0 00:20:09.430 07:28:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.430 07:28:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.430 07:28:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.430 07:28:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.430 07:28:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.430 07:28:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.430 07:28:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.430 07:28:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.430 07:28:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:20:09.430 07:28:47 -- spdk/autotest.sh@32 -- # uname -s 00:20:09.430 07:28:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:20:09.430 07:28:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:20:09.430 07:28:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:20:09.430 07:28:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:20:09.430 07:28:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:20:09.430 07:28:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:20:09.430 07:28:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:20:09.430 
07:28:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:20:09.430 07:28:47 -- spdk/autotest.sh@48 -- # udevadm_pid=53807 00:20:09.430 07:28:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:20:09.430 07:28:47 -- pm/common@17 -- # local monitor 00:20:09.430 07:28:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:09.430 07:28:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:20:09.430 07:28:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:20:09.430 07:28:47 -- pm/common@21 -- # date +%s 00:20:09.430 07:28:47 -- pm/common@25 -- # sleep 1 00:20:09.431 07:28:47 -- pm/common@21 -- # date +%s 00:20:09.431 07:28:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721028527 00:20:09.431 07:28:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721028527 00:20:09.431 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721028527_collect-vmstat.pm.log 00:20:09.431 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721028527_collect-cpu-load.pm.log 00:20:10.363 07:28:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:20:10.363 07:28:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:20:10.363 07:28:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.363 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:20:10.363 07:28:48 -- spdk/autotest.sh@59 -- # create_test_list 00:20:10.363 07:28:48 -- common/autotest_common.sh@746 -- # xtrace_disable 00:20:10.363 07:28:48 -- common/autotest_common.sh@10 -- # set +x 00:20:10.363 07:28:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:20:10.363 07:28:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:20:10.363 07:28:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:20:10.363 07:28:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:20:10.363 07:28:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:20:10.363 07:28:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:20:10.363 07:28:48 -- common/autotest_common.sh@1455 -- # uname 00:20:10.363 07:28:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:20:10.363 07:28:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:20:10.363 07:28:48 -- common/autotest_common.sh@1475 -- # uname 00:20:10.363 07:28:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:20:10.363 07:28:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:20:10.620 07:28:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:20:10.620 07:28:48 -- spdk/autotest.sh@72 -- # hash lcov 00:20:10.620 07:28:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:10.620 07:28:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:20:10.620 --rc lcov_branch_coverage=1 00:20:10.620 --rc lcov_function_coverage=1 00:20:10.620 --rc genhtml_branch_coverage=1 00:20:10.620 --rc genhtml_function_coverage=1 00:20:10.620 --rc genhtml_legend=1 00:20:10.620 --rc geninfo_all_blocks=1 00:20:10.620 ' 00:20:10.620 07:28:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:20:10.620 --rc lcov_branch_coverage=1 00:20:10.620 --rc lcov_function_coverage=1 00:20:10.620 --rc genhtml_branch_coverage=1 
00:20:10.620 --rc genhtml_function_coverage=1 00:20:10.620 --rc genhtml_legend=1 00:20:10.620 --rc geninfo_all_blocks=1 00:20:10.620 ' 00:20:10.620 07:28:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:20:10.620 --rc lcov_branch_coverage=1 00:20:10.620 --rc lcov_function_coverage=1 00:20:10.620 --rc genhtml_branch_coverage=1 00:20:10.620 --rc genhtml_function_coverage=1 00:20:10.620 --rc genhtml_legend=1 00:20:10.620 --rc geninfo_all_blocks=1 00:20:10.620 --no-external' 00:20:10.620 07:28:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:20:10.620 --rc lcov_branch_coverage=1 00:20:10.620 --rc lcov_function_coverage=1 00:20:10.620 --rc genhtml_branch_coverage=1 00:20:10.620 --rc genhtml_function_coverage=1 00:20:10.620 --rc genhtml_legend=1 00:20:10.620 --rc geninfo_all_blocks=1 00:20:10.620 --no-external' 00:20:10.620 07:28:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:20:10.620 lcov: LCOV version 1.14 00:20:10.620 07:28:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:20:28.691 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:20:28.691 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 
00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 
00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:20:38.663 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:20:38.663 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 
00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:20:38.664 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:20:38.664 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:20:41.949 07:29:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:20:41.949 07:29:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.949 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:20:41.949 07:29:20 -- spdk/autotest.sh@91 -- # rm -f 00:20:41.949 07:29:20 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:42.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.775 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:42.775 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:42.775 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:42.775 0000:00:13.0 (1b36 0010): Already using the nvme driver 
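The `lcov ... -c -i -t Baseline` invocation above captures an initial (zero-count) coverage baseline, which is why geninfo reports "no functions found" for objects whose .gcda counters have not been exercised yet; those warnings are expected at this point in the run. A minimal sketch of the usual capture-and-merge flow behind such a baseline step, assuming standard lcov 1.14 options; the tracefile names cov_test.info and cov_total.info below are illustrative, not taken from the SPDK scripts:

    # 1. Baseline: record zero counts for every instrumented object (-i / --initial).
    lcov --capture --initial --directory /home/vagrant/spdk_repo/spdk \
         --output-file cov_base.info

    # 2. Run the test suites; each test updates the per-object .gcda counters.
    #    (placeholder for the autotest test phases that follow in this log)

    # 3. Capture the post-test counters.
    lcov --capture --directory /home/vagrant/spdk_repo/spdk \
         --output-file cov_test.info

    # 4. Merge baseline and test data so files never touched by the tests
    #    still appear in the report with 0% coverage.
    lcov -a cov_base.info -a cov_test.info --output-file cov_total.info

    # 5. Optional HTML report.
    genhtml cov_total.info --output-directory coverage_report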
00:20:43.033 07:29:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:20:43.033 07:29:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:20:43.033 07:29:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:20:43.033 07:29:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:20:43.033 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.033 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:43.033 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.033 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.033 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:43.033 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.033 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.033 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:20:43.033 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:43.033 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.034 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:20:43.034 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:20:43.034 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.034 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:20:43.034 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:20:43.034 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.034 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:20:43.034 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:20:43.034 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:43.034 07:29:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:20:43.034 07:29:21 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:20:43.034 07:29:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:43.034 07:29:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:43.034 07:29:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:20:43.034 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.034 07:29:21 -- spdk/autotest.sh@112 -- 
# [[ -z '' ]] 00:20:43.034 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:20:43.034 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:20:43.034 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:43.034 No valid GPT data, bailing 00:20:43.034 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:43.034 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.034 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.034 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:20:43.034 1+0 records in 00:20:43.034 1+0 records out 00:20:43.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136814 s, 76.6 MB/s 00:20:43.034 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.034 07:29:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:43.034 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:20:43.034 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:20:43.034 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:20:43.034 No valid GPT data, bailing 00:20:43.034 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:43.034 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.034 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.034 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:20:43.034 1+0 records in 00:20:43.034 1+0 records out 00:20:43.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469592 s, 223 MB/s 00:20:43.034 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.034 07:29:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:43.034 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:20:43.034 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:20:43.034 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:20:43.034 No valid GPT data, bailing 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.292 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.292 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:20:43.292 1+0 records in 00:20:43.292 1+0 records out 00:20:43.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450378 s, 233 MB/s 00:20:43.292 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.292 07:29:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:43.292 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:20:43.292 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:20:43.292 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:20:43.292 No valid GPT data, bailing 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.292 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.292 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:20:43.292 1+0 records in 00:20:43.292 1+0 records out 00:20:43.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460074 s, 228 MB/s 00:20:43.292 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.292 07:29:21 -- 
spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:43.292 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:20:43.292 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:20:43.292 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:20:43.292 No valid GPT data, bailing 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.292 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.292 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:20:43.292 1+0 records in 00:20:43.292 1+0 records out 00:20:43.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508702 s, 206 MB/s 00:20:43.292 07:29:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:20:43.292 07:29:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:20:43.292 07:29:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:20:43.292 07:29:21 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:20:43.292 07:29:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:20:43.292 No valid GPT data, bailing 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:20:43.292 07:29:21 -- scripts/common.sh@391 -- # pt= 00:20:43.292 07:29:21 -- scripts/common.sh@392 -- # return 1 00:20:43.292 07:29:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:20:43.550 1+0 records in 00:20:43.550 1+0 records out 00:20:43.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521073 s, 201 MB/s 00:20:43.550 07:29:21 -- spdk/autotest.sh@118 -- # sync 00:20:43.550 07:29:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:20:43.550 07:29:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:20:43.550 07:29:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:45.467 07:29:23 -- spdk/autotest.sh@124 -- # uname -s 00:20:45.467 07:29:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:20:45.467 07:29:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:20:45.467 07:29:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.467 07:29:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.467 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 ************************************ 00:20:45.467 START TEST setup.sh 00:20:45.467 ************************************ 00:20:45.467 07:29:23 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:20:45.467 * Looking for test storage... 
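The trace above covers two preparatory steps in autotest.sh: get_zoned_devs marks any namespace whose /sys/block/<dev>/queue/zoned attribute is not "none", and every remaining /dev/nvme*n* device is probed (scripts/spdk-gpt.py plus blkid) before its first megabyte is zeroed with dd. Stripped of xtrace, the flow looks roughly like the sketch below; the sysfs path, the blkid invocation, the glob pattern and the dd command are taken from the log, while the loop structure and variable names are illustrative rather than the exact autotest.sh code:

  #!/usr/bin/env bash
  # Sketch: skip zoned namespaces, then wipe the first MiB of every
  # namespace that carries no partition table.
  shopt -s extglob                      # needed for the !(*p*) pattern below
  declare -A zoned_devs
  for nvme in /sys/block/nvme*; do
      dev=${nvme##*/}
      # "none" marks a conventional (non-zoned) namespace
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[$dev]=1
      fi
  done
  for dev in /dev/nvme*n!(*p*); do
      [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue   # leave zoned devices alone
      # The log additionally runs scripts/spdk-gpt.py here; for this sketch the
      # blkid probe alone decides whether a partition table is present.
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1        # clear stale metadata
      fi
  done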
00:20:45.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:45.467 07:29:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:20:45.467 07:29:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:20:45.467 07:29:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:20:45.467 07:29:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.467 07:29:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.467 07:29:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 ************************************ 00:20:45.467 START TEST acl 00:20:45.467 ************************************ 00:20:45.467 07:29:23 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:20:45.467 * Looking for test storage... 00:20:45.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:45.467 07:29:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:20:45.467 07:29:24 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:20:45.467 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:45.468 07:29:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:45.468 07:29:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:20:45.468 07:29:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:20:45.468 07:29:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:20:45.468 07:29:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:20:45.468 07:29:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:20:45.468 07:29:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:45.468 07:29:24 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:46.843 07:29:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:20:46.843 07:29:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:20:46.843 07:29:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:46.843 07:29:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:20:46.843 07:29:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:20:46.843 07:29:25 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:47.101 07:29:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:20:47.101 07:29:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:47.101 07:29:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.684 Hugepages 00:20:47.684 node hugesize free / total 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.684 00:20:47.684 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:20:47.684 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:20:47.942 07:29:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:20:47.942 07:29:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:47.942 07:29:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:47.942 07:29:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:47.942 ************************************ 00:20:47.942 START TEST denied 00:20:47.942 ************************************ 00:20:47.942 07:29:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:20:47.942 07:29:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:20:47.942 07:29:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:20:47.942 07:29:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:20:47.942 07:29:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:47.942 07:29:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:20:49.315 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:20:49.315 07:29:27 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:49.315 07:29:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:55.942 00:20:55.942 real 0m7.032s 00:20:55.942 user 0m0.752s 00:20:55.942 sys 0m1.320s 00:20:55.942 ************************************ 00:20:55.942 END TEST denied 00:20:55.942 ************************************ 00:20:55.942 07:29:33 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.942 07:29:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:20:55.942 07:29:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:20:55.942 07:29:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:20:55.942 07:29:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:55.942 07:29:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.942 07:29:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:55.942 ************************************ 00:20:55.942 START TEST allowed 00:20:55.942 ************************************ 00:20:55.942 07:29:33 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:20:55.942 07:29:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:20:55.942 07:29:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:20:55.942 07:29:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:20:55.942 07:29:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:20:55.942 07:29:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:20:56.200 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:20:56.200 07:29:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:57.574 00:20:57.574 real 0m2.180s 00:20:57.574 user 0m0.993s 00:20:57.574 sys 0m1.181s 00:20:57.574 07:29:35 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.574 07:29:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:20:57.574 ************************************ 00:20:57.574 END TEST allowed 00:20:57.574 ************************************ 00:20:57.574 07:29:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:20:57.574 00:20:57.574 real 0m11.835s 00:20:57.574 user 0m2.960s 00:20:57.574 sys 0m3.919s 00:20:57.574 07:29:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.574 07:29:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:20:57.574 ************************************ 00:20:57.574 END TEST acl 00:20:57.574 ************************************ 00:20:57.574 07:29:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:20:57.574 07:29:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:20:57.574 07:29:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:57.574 07:29:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.574 07:29:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:20:57.574 ************************************ 00:20:57.574 START TEST hugepages 00:20:57.574 ************************************ 00:20:57.574 07:29:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:20:57.574 * Looking for test storage... 
00:20:57.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.574 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5803996 kB' 'MemAvailable: 7387520 kB' 'Buffers: 2436 kB' 'Cached: 1796844 kB' 'SwapCached: 0 kB' 'Active: 444044 kB' 'Inactive: 1456784 kB' 'Active(anon): 112060 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103468 kB' 'Mapped: 48468 kB' 'Shmem: 10512 kB' 'KReclaimable: 63388 kB' 'Slab: 136300 kB' 'SReclaimable: 63388 kB' 'SUnreclaim: 72912 kB' 'KernelStack: 6380 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.575 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
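The long run of "continue" lines above (and the match that follows just below) is the xtrace of get_meminfo scanning /proc/meminfo one field at a time for Hugepagesize, which is 2048 kB on this VM with 2048 pages preallocated. Outside of xtrace the same lookup is only a few lines; a minimal sketch using the same IFS=': ' read pattern visible in the trace (the function name is illustrative, not the helper's real name):

  # Sketch: return the default hugepage size in kB, as the trace above derives it.
  get_hugepagesize_kb() {
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == Hugepagesize ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  default_hugepages=$(get_hugepagesize_kb)     # 2048 on this run
  nr_hugepages=$(< /proc/sys/vm/nr_hugepages)  # global count the suite adjusts later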
00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:20:57.576 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:20:57.577 07:29:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:20:57.577 07:29:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:57.577 07:29:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.577 07:29:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:57.577 ************************************ 00:20:57.577 START TEST default_setup 00:20:57.577 ************************************ 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:20:57.577 07:29:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:58.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:58.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:58.715 
07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7876520 kB' 'MemAvailable: 9459804 kB' 'Buffers: 2436 kB' 'Cached: 1796832 kB' 'SwapCached: 0 kB' 'Active: 463408 kB' 'Inactive: 1456808 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 48660 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135584 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72720 kB' 'KernelStack: 6464 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:58.715 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.715 07:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [00:20:58.715-58.716 07:29:37: the AnonHugePages lookup keeps issuing IFS=': '; read -r var val _ and hits continue for every non-matching /proc/meminfo field -- Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted]
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 [get=HugePages_Surp, node is empty, mem_f=/proc/meminfo, the [[ -e /sys/devices/system/node/node/meminfo ]] check fails, mapfile -t mem, any leading "Node <id> " prefix is stripped, IFS=': ']
00:20:58.716 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7876020 kB' 'MemAvailable: 9459304 kB' 'Buffers: 2436 kB' 'Cached: 1796832 kB' 'SwapCached: 0 kB' 'Active: 462980 kB' 'Inactive: 1456808 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122084 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135592 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72728 kB' 'KernelStack: 6368 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
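The field-by-field skipping summarized above, and repeated for each lookup below, follows one pattern. A rough stand-alone reconstruction of what the trace shows is sketched here; the function name meminfo_lookup and its exact structure are illustrative assumptions, not the verbatim SPDK setup/common.sh source:

    # Sketch of the meminfo lookup pattern visible in this trace.
    # Usage: meminfo_lookup HugePages_Surp   -> prints the numeric value, returns 0
    meminfo_lookup() {
        local get=$1 mem_f=/proc/meminfo var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # e.g. 0 for HugePages_Surp in this run
            return 0
        done < "$mem_f"
        return 1                               # field not present
    }

Run on the host above, meminfo_lookup AnonHugePages, meminfo_lookup HugePages_Surp and meminfo_lookup HugePages_Rsvd would each print the 0 echoed in this log, which is why one continue appears per /proc/meminfo field before every value is returned.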
00:20:58.716-58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [the HugePages_Surp scan reads and skips MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd]
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 [get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix stripping, IFS=': ']
00:20:58.718 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [second /proc/meminfo snapshot, identical to the one above except: Active: 462920 kB, Active(anon): 130936 kB, AnonPages: 122024 kB, KernelStack: 6336 kB, PageTables: 4024 kB]
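For readers reproducing this lookup by hand, an equivalent shorthand (not the command the harness runs) is:

    # Prints the HugePages_Rsvd value (0 in this run) straight from /proc/meminfo.
    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo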
00:20:58.718-58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [the HugePages_Rsvd scan walks the same field sequence as above, reading and skipping everything from MemTotal through HugePages_Total and HugePages_Free]
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
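Both arithmetic checks above pass. A stand-alone sketch of that verification, using only the values echoed in this excerpt (the literal 1024 is the counter fetched earlier in the log; the variable names are taken from the trace but the script around them is illustrative), would be:

    # Illustrative re-creation of the hugepages.sh@107/@109 checks, values from this log.
    nr_hugepages=1024   # requested default 2048 kB hugepage count
    surp=0              # HugePages_Surp reported above
    resv=0              # HugePages_Rsvd reported above
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"
    (( 1024 == nr_hugepages )) && echo "allocated count matches the requested count"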
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 [get=HugePages_Total, node empty, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix stripping, IFS=': ']
00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' [third /proc/meminfo snapshot, identical to the first except: MemFree: 7875516 kB, MemAvailable: 9458800 kB, Active: 462904 kB, Active(anon): 130920 kB, AnonPages: 122008 kB, KernelStack: 6320 kB, PageTables: 3972 kB]
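The empty node local and the [[ -e /sys/devices/system/node/node/meminfo ]] test summarized above suggest the same lookup can be scoped to a single NUMA node. A hedged sketch of that source selection (illustrative only; per-node meminfo files prefix each line with "Node <id> ", which is why the trace strips that prefix before parsing):

    # Pick the meminfo source: system-wide, or one NUMA node if an id is given.
    node=${1:-}                       # empty -> /proc/meminfo, as in this run
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    grep HugePages "$mem_f"           # every line carrying a HugePages counter

With no argument this falls back to the system-wide /proc/meminfo used throughout this log.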
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.720 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.721 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
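The long runs of "[[ <field> == \H\u\g\e... ]]" / "continue" traced above are setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time under xtrace: /proc/meminfo for the system-wide numbers, and /sys/devices/system/node/node<N>/meminfo when a node id is passed, as in the HugePages_Surp lookup for node 0 that starts here. A minimal stand-alone sketch of the same lookup (a hypothetical get_meminfo_sketch helper, not the verbatim SPDK function) would be:

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node id, read the per-node copy instead, as the trace does for node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node <id> " prefix; drop it, then print the value
    # that follows "<key>:".
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$key:" '$1 == k { print $2; exit }'
}

get_meminfo_sketch HugePages_Total     # 1024 in the /proc/meminfo snapshot above
get_meminfo_sketch HugePages_Surp 0    # 0 for node0, per /sys/devices/system/node/node0/meminfo

With HugePages_Total reported as 1024 and both the reserved and surplus counts at 0, the "(( 1024 == nr_hugepages + surp + resv ))" check in hugepages.sh passes, and the per-node accounting that follows simply records 1024 pages on node0.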
00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7875012 kB' 'MemUsed: 4366960 kB' 'SwapCached: 0 kB' 'Active: 462712 kB' 'Inactive: 1456808 kB' 'Active(anon): 130728 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1799268 kB' 'Mapped: 48472 kB' 'AnonPages: 121816 kB' 'Shmem: 10472 kB' 'KernelStack: 6372 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62864 kB' 'Slab: 135592 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.722 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.723 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:20:58.981 node0=1024 expecting 1024 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:20:58.981 00:20:58.981 real 0m1.346s 00:20:58.981 user 0m0.580s 00:20:58.981 sys 0m0.762s 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.981 07:29:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:20:58.981 ************************************ 00:20:58.981 END TEST default_setup 00:20:58.981 ************************************ 00:20:58.981 07:29:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:20:58.981 07:29:37 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:20:58.981 07:29:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:58.981 07:29:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.981 07:29:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:20:58.981 ************************************ 00:20:58.981 START TEST per_node_1G_alloc 00:20:58.981 ************************************ 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:20:58.981 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:59.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.502 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:59.502 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:20:59.502 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:59.502 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8914008 kB' 'MemAvailable: 10497304 kB' 'Buffers: 2436 kB' 'Cached: 1796832 kB' 'SwapCached: 0 kB' 'Active: 463260 kB' 'Inactive: 1456812 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135656 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72776 kB' 'KernelStack: 6328 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.502 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.503 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8913760 kB' 'MemAvailable: 10497056 kB' 'Buffers: 2436 kB' 'Cached: 1796832 kB' 'SwapCached: 0 kB' 'Active: 463604 kB' 'Inactive: 1456812 kB' 'Active(anon): 131620 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122552 kB' 'Mapped: 49316 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135704 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72824 kB' 'KernelStack: 6368 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc 
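[Annotation] The trace above is bash xtrace output from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (the per-node file under /sys/devices/system/node/node<N>/meminfo is preferred when a node number is passed, but node is empty in this run, so the @23 existence check fails and /proc/meminfo is kept), strips any "Node N " prefix, and then walks the snapshot key by key until the requested field matches. Every non-matching key produces one of the "[[ X == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs seen here; the backslashes are just xtrace quoting the right-hand pattern operand of [[ ]] character by character. A minimal sketch of the loop as implied by the traced commands (not the verbatim SPDK source; it omits, for example, the extra [[ -n ... ]] check at common.sh@25):

    # Sketch reconstructed from the traced setup/common.sh lines; details are assumptions.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node meminfo when a node number is given and the file exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files (extglob pattern)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys -> the "continue" lines in the trace
            echo "$val"                        # e.g. "echo 0" for AnonHugePages in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                               # field not found
    }

Per the trace, hugepages.sh@97 captures the result as anon=0, and @99 immediately repeats the same scan for HugePages_Surp, which is why the identical key-by-key walk follows below.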
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.504 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:59.505 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8913760 kB' 'MemAvailable: 10497052 kB' 'Buffers: 2436 kB' 'Cached: 1796828 kB' 'SwapCached: 0 kB' 'Active: 463620 kB' 'Inactive: 1456808 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456808 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122848 kB' 'Mapped: 48592 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135704 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72824 kB' 'KernelStack: 6352 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 
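[Annotation] The snapshots printed at common.sh@16 already contain the numbers these scans return: HugePages_Total and HugePages_Free are both 512 with Hugepagesize 2048 kB, i.e. 512 x 2048 kB = 1,048,576 kB (1 GiB) of hugetlb memory, matching the reported Hugetlb: 1048576 kB for this per_node_1G_alloc run, while HugePages_Rsvd and HugePages_Surp are both 0, which is why the HugePages_Surp scan above and the HugePages_Rsvd scan below each end with "echo 0".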
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.506 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.506 
07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.507 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:20:59.508 nr_hugepages=512 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=512 00:20:59.508 resv_hugepages=0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:20:59.508 surplus_hugepages=0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:20:59.508 anon_hugepages=0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8913760 kB' 'MemAvailable: 10497056 kB' 'Buffers: 2436 kB' 'Cached: 1796832 kB' 'SwapCached: 0 kB' 'Active: 463040 kB' 'Inactive: 1456812 kB' 'Active(anon): 131056 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122192 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135712 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72832 kB' 'KernelStack: 6368 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
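[Annotation] At hugepages.sh@97-@110 the script gathers the three counters through get_meminfo, echoes them, and asserts that the configured page count is fully accounted for before re-reading HugePages_Total. A rough sketch of that sequence as implied by the trace; the variable names and the origin of the literal 512 (already expanded by the time it appears inside (( ))) are assumptions:

    # Sketch of the hugepages.sh accounting step; exact names are assumptions.
    anon=$(get_meminfo AnonHugePages)    # 0 in this run (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=512
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( 512 == nr_hugepages + surp + resv ))  # @107: every requested page accounted for
    (( 512 == nr_hugepages ))                # @109: no surplus or reserved pages expected

Both checks pass here (all three counters are 0), so the script goes on at @110 to re-read HugePages_Total, producing the key-by-key scan that continues below.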
continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 
07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 
07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.508 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:20:59.509 07:29:38 
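Everything in the scan above comes from one small helper. For readers who do not want to trace the xtrace output, here is a minimal standalone sketch of what get_meminfo is doing: pick /proc/meminfo, or the per-node copy in sysfs when a node number is given, strip the "Node <n> " prefix, and print the value of one field. The implementation below is illustrative (plain sed/awk), not the exact code in setup/common.sh.

    get_meminfo() {
        local field=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-NUMA-node statistics live in sysfs and prefix every line with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed 's/^Node [0-9]* //' "$mem_f" | awk -v f="$field:" '$1 == f { print $2 }'
    }

    get_meminfo HugePages_Total      # whole system, e.g. 512
    get_meminfo HugePages_Surp 0     # NUMA node 0 only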
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:20:59.509 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:20:59.510 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=HugePages_Surp; local node=0; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:20:59.510 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8913760 kB' 'MemUsed: 3328212 kB' 'SwapCached: 0 kB' 'Active: 462732 kB' 'Inactive: 1456812 kB' 'Active(anon): 130748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1799268 kB' 'Mapped: 48472 kB' 'AnonPages: 121928 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135712 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 xtrace elided: the same per-field scan as above, this time over the node0 snapshot, until HugePages_Surp matches]
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
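The get_nodes bookkeeping above (one node detected, nodes_sys[0]=512) can be reproduced outside the test by walking the sysfs node directories. The sketch below reads the per-node hugepage counters straight from sysfs rather than going through the script's get_meminfo helper, and it assumes the 2048 kB default hugepage size reported in the snapshots.

    shopt -s extglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # key is the node number, value is how many 2 MiB hugepages that node currently holds
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}  per-node counts: ${nodes_sys[*]}"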
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:20:59.511 node0=512 expecting 512
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:20:59.511 real 0m0.691s
00:20:59.511 user 0m0.314s
00:20:59.511 sys 0m0.420s
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:59.511 07:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:20:59.511 ************************************
00:20:59.511 END TEST per_node_1G_alloc
00:20:59.511 ************************************
00:20:59.511 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:20:59.511 07:29:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:20:59.511 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:20:59.511 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:59.511 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:20:59.769 ************************************
00:20:59.769 START TEST even_2G_alloc
00:20:59.769 ************************************
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
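The step from get_test_nr_hugepages 2097152 to nr_hugepages=1024 is a single division: the requested size divided by the default hugepage size, which the snapshots report as Hugepagesize: 2048 kB. Treating the argument as a size in KiB (2097152 KiB = 2 GiB) is an inference from the trace, not something it states; under that assumption the arithmetic is:

    size_kb=2097152                                                             # 2 GiB, as the test name suggests
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 on this machine
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                               # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"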
00:20:59.769 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62-65 -- # local user_nodes; local _nr_hugepages=1024; local _no_nodes=1
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67-74 -- # nodes_test=(); local -g nodes_test; (( 0 > 0 )); (( 0 > 0 ))
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81-84 -- # (( _no_nodes > 0 )); nodes_test[_no_nodes - 1]=1024; : 0; : 0
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:20:59.770 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:00.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:00.028 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:00.028 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:00.028 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:00.028 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
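With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh is asked to place the hugepage reservation evenly across NUMA nodes instead of letting the kernel decide. The knob it ultimately has to drive is the standard per-node nr_hugepages file in sysfs. The sketch below shows that mechanism under an assumed even split of the total; whether setup.sh divides the count this way is not visible in the trace, and on this single-node VM the distinction makes no difference.

    # needs root; assumes the 2048 kB default hugepage size seen in the snapshots
    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))          # assumed even split; 1024 on a one-node box
    for node in "${nodes[@]}"; do
        echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done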
00:21:00.028 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:21:00.028 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:21:00.293 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:21:00.293 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:21:00.293 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # local get=AnonHugePages; local node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:21:00.293 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7874240 kB' 'MemAvailable: 9457540 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463424 kB' 'Inactive: 1456816 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135740 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72860 kB' 'KernelStack: 6392 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32 xtrace elided: per-field scan of the snapshot above until AnonHugePages matches]
00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
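The verification that follows repeats the pattern already seen for the 512-page case: transparent hugepages must not be forced on (the @96 check above), AnonHugePages should stay at 0, and the global counters have to add up to the requested total. Condensed into standalone shell, with an illustrative helper in place of the script's get_meminfo, the check is roughly:

    meminfo_val() { awk -v f="$1:" '$1 == f { print $2 }' /proc/meminfo; }

    expected=1024
    anon=$(meminfo_val AnonHugePages)     # in kB; 0 unless THP inflated it
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    total=$(meminfo_val HugePages_Total)
    if (( total == expected + surp + resv )); then
        echo "hugepages verified: total=$total surp=$surp resv=$resv anon=${anon}kB"
    else
        echo "unexpected hugepage accounting" >&2
    fi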
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7873988 kB' 'MemAvailable: 9457288 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463000 kB' 'Inactive: 1456816 kB' 'Active(anon): 131016 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122080 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135740 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72860 kB' 'KernelStack: 6352 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.294 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.295 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7873988 kB' 'MemAvailable: 9457288 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463200 kB' 'Inactive: 1456816 kB' 'Active(anon): 131216 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122280 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135740 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72860 kB' 'KernelStack: 6336 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.296 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.297 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 
07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
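The repeated trace entries above and below appear to come from a get_meminfo helper in setup/common.sh, called from setup/hugepages.sh for each counter it needs (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total): it picks /proc/meminfo, or a per-node meminfo file under /sys/devices/system/node when a node number is passed, strips any leading "Node N " prefix, then scans key/value pairs until the requested field matches and echoes its value. The sketch that follows is a rough reconstruction inferred from this trace, not the verbatim SPDK source, and the exact body may differ.

shopt -s extglob   # assumed: needed for the +([0-9]) pattern used to strip per-node prefixes

# Approximation of the helper seen in the trace; inferred from the log, not copied from setup/common.sh.
get_meminfo() {
    local get=$1 node=$2
    local var val line
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs when a node number is supplied.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix present in per-node meminfo files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Stop at the requested key (e.g. HugePages_Surp) and print its value.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

Used the way the hugepages script does in this log, anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd) all come back 0 here, which is what lets the later check (( 1024 == nr_hugepages + surp + resv )) pass for the 1024 requested 2048 kB pages reported as HugePages_Total.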
00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:00.298 nr_hugepages=1024 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:00.298 resv_hugepages=0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:00.298 surplus_hugepages=0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:00.298 anon_hugepages=0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7873988 kB' 'MemAvailable: 9457288 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 462952 kB' 'Inactive: 1456816 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 122032 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135740 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72860 kB' 'KernelStack: 6320 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.298 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.299 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
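The xtrace above is the setup/common.sh get_meminfo helper scanning a meminfo file field by field: it splits each line on ': ', skips every key that is not the one requested, and echoes the matching value (here 1024 for HugePages_Total). A minimal standalone re-creation of that visible pattern, assuming the field name is passed in $1 and an optional NUMA node in $2; this is a sketch of the behaviour seen in the trace, not the code from the SPDK tree:

  #!/usr/bin/env bash
  shopt -s extglob
  get=$1 node=$2                       # e.g. get=HugePages_Total, node=0
  mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue # not the requested field, keep scanning
      echo "$val"                      # value in kB, or a bare count for HugePages_*
      exit 0
  done < <(printf '%s\n' "${mem[@]}")
  exit 1
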
00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7873988 kB' 'MemUsed: 4367984 kB' 'SwapCached: 0 kB' 'Active: 462952 kB' 'Inactive: 1456816 kB' 'Active(anon): 130968 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1799272 kB' 'Mapped: 48472 kB' 'AnonPages: 122032 kB' 'Shmem: 10472 kB' 'KernelStack: 6388 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135740 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.300 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:00.301 node0=1024 expecting 1024 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:00.301 00:21:00.301 real 0m0.641s 00:21:00.301 user 0m0.293s 00:21:00.301 sys 0m0.394s 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.301 07:29:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:00.301 ************************************ 00:21:00.301 END TEST even_2G_alloc 00:21:00.301 ************************************ 00:21:00.301 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:21:00.301 07:29:38 setup.sh.hugepages -- 
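The even_2G_alloc run that finishes here ends with the per-node check echoed as 'node0=1024 expecting 1024': each NUMA node's hugepage count (plus any reserved and surplus pages) has to match the even share the test configured. A small self-contained sketch of that kind of check, reading the counts straight from sysfs; the variable names and the fixed expectation of 1024 pages are illustrative assumptions, not the test's internals:

  #!/usr/bin/env bash
  expected_per_node=1024
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      got=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$node=$got expecting $expected_per_node"
      # a mismatch would fail in the same way as the [[ 1024 == 1024 ]] check above
      [[ $got -eq $expected_per_node ]] || exit 1
  done
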
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:21:00.301 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:00.301 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.301 07:29:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:00.301 ************************************ 00:21:00.301 START TEST odd_alloc 00:21:00.301 ************************************ 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:21:00.301 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:00.302 07:29:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:00.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.820 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:00.820 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:00.820 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:00.820 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:21:00.820 07:29:39 
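odd_alloc begins by converting the requested HUGEMEM=2049 MB (the 2098176 kB passed to get_test_nr_hugepages above) into a hugepage count, which is how nr_hugepages lands on the odd value 1025. The arithmetic below reproduces that number under the assumption of 2 MiB pages and round-up to cover the full request; the exact rounding inside the repo's helper is not shown in the trace, so treat this as a sketch:

  # 2049 MB requested, 2048 kB per hugepage
  req_kb=$((2049 * 1024))                      # 2098176 kB
  page_kb=2048
  nr=$(((req_kb + page_kb - 1) / page_kb))     # ceiling division -> 1025
  echo "$nr pages cover $((nr * page_kb)) kB"  # 1025 pages cover 2099200 kB, cf. 'Hugetlb: 2099200 kB'
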
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883920 kB' 'MemAvailable: 9467220 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463312 kB' 'Inactive: 1456816 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122460 kB' 'Mapped: 48552 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135564 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72684 kB' 'KernelStack: 6424 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.820 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.821 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883920 kB' 'MemAvailable: 9467220 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463128 kB' 'Inactive: 1456816 kB' 'Active(anon): 131144 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121984 kB' 'Mapped: 48560 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135568 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72688 kB' 'KernelStack: 6376 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.822 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.823 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:00.824 07:29:39 
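The pass above is the generic meminfo lookup: the helper prints a captured /proc/meminfo snapshot, then reads it back one "field: value" pair at a time with IFS=': ', skipping every field until it reaches HugePages_Surp, where it echoes the value (0) and returns it to hugepages.sh as surp. A minimal sketch of that lookup pattern, assuming only what the xtrace shows (the function name is illustrative; the real setup/common.sh helper also snapshots the file with mapfile and strips a leading "Node <N>" prefix when a specific NUMA node is requested):

    get_meminfo_sketch() {
        # Walk /proc/meminfo one "field: value" pair at a time, the way the
        # trace above does, and print the value of the requested field.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields until the match
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 on this runner, per the snapshot above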
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883920 kB' 'MemAvailable: 9467220 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463148 kB' 'Inactive: 1456816 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48560 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135568 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72688 kB' 'KernelStack: 6344 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.824 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:00.825 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 
07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:01.087 nr_hugepages=1025 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:21:01.087 resv_hugepages=0 00:21:01.087 surplus_hugepages=0 00:21:01.087 anon_hugepages=0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883668 kB' 'MemAvailable: 9466968 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 
462984 kB' 'Inactive: 1456816 kB' 'Active(anon): 131000 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122104 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135580 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6368 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.087 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 
07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:21:01.088 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[… xtrace elided: setup/common.sh@32 compares each remaining meminfo field in turn (Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, … CmaTotal, CmaFree, Unaccepted) against HugePages_Total and continues …]
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
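For readability, here is a minimal bash sketch of the get_meminfo helper whose xtrace fills the lines above and below. It is reconstructed and simplified from the trace itself (setup/common.sh@16-@33), not copied from the SPDK source, so treat the details as approximate.

    # Editor's sketch, assuming extglob (the trace uses the +([0-9]) pattern).
    shopt -s extglob
    get_meminfo() {                        # get_meminfo <Field> [node]
      local get=$1 node=$2 mem_f=/proc/meminfo
      local -a mem
      local line var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # skip fields until the requested one
      done
      return 1
    }

    # The check traced at setup/hugepages.sh@110 boils down to:
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1025 in this run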
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:21:01.089 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7883668 kB' 'MemUsed: 4358304 kB' 'SwapCached: 0 kB' 'Active: 463052 kB' 'Inactive: 1456816 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1799272 kB' 'Mapped: 48472 kB' 'AnonPages: 122224 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135576 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[… xtrace elided: setup/common.sh@32 compares each node0 meminfo field in turn against HugePages_Surp and continues until it reaches the HugePages_Surp line …]
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:21:01.090 node0=1025 expecting 1025
00:21:01.090 ************************************
00:21:01.090 END TEST odd_alloc
00:21:01.090 ************************************
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:21:01.090 real 0m0.695s  user 0m0.327s  sys 0m0.390s
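The per-node half of the check condenses to the sketch below. The array names (nodes_test, nodes_sys) come from the trace; which array feeds which half of the "node0=1025 expecting 1025" message is a guess, and the resv/surp handling is simplified. get_meminfo is the helper sketched earlier.

    # Editor's sketch; assumes nodes_test/nodes_sys were filled by get_nodes earlier
    # and that resv holds the reserved-page count used by the test.
    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # hugepages.sh@116
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # hugepages.sh@117 (0 on node0 here)
    done
    for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"  # prints "node0=1025 expecting 1025"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]                  # hugepages.sh@130: both 1025, test passes
    done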
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:01.090 07:29:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:21:01.090 07:29:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:21:01.090 07:29:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:21:01.090 07:29:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:21:01.090 07:29:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:01.090 07:29:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:21:01.090 ************************************
00:21:01.090 START TEST custom_alloc
00:21:01.090 ************************************
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:21:01.090 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[… xtrace elided: the second get_test_nr_hugepages_per_node pass repeats the same local declarations, distributes the 512 pages over the nodes in nodes_hp (nodes_test[0]=512) and returns 0 …]
00:21:01.091 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:21:01.091 07:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:21:01.091 07:29:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:21:01.091 07:29:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:01.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:01.659 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:01.659 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:01.659 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:01.659 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
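Stepping back from the trace: the way "get_test_nr_hugepages 1048576" turns into nr_hugepages=512 and HUGENODE='nodes_hp[0]=512' is plain hugepage arithmetic. The division below is inferred from the numbers (Hugepagesize is 2048 kB in the meminfo dumps that follow), not copied from setup/hugepages.sh.

    # Editor's sketch of the arithmetic behind the values above.
    size_kb=1048576                                  # get_test_nr_hugepages 1048576 -> 1 GiB worth of hugepages
    hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" in the meminfo dumps below
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
    nodes_hp[0]=$nr_hugepages                        # single-node VM, so node 0 gets all 512
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"            # handed to scripts/setup.sh by 'setup output'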
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
[… xtrace elided: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon) …]
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:21:01.659 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8930288 kB' 'MemAvailable: 10513588 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463744 kB' 'Inactive: 1456816 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122884 kB' 'Mapped: 48608 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135568 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72688 kB' 'KernelStack: 6356 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[… xtrace elided: setup/common.sh@32 compares each /proc/meminfo field in turn against AnonHugePages, skipping the non-matching ones …]
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
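The anon accounting above reduces to the sketch below. The sysfs path is assumed; the trace only shows the already-expanded value "always [madvise] never", which is the format that file produces. get_meminfo is the helper sketched earlier.

    # Editor's sketch: only count anonymous THP when THP is not disabled.
    thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp_setting != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, so anon ends up as 0
    else
      anon=0                              # THP disabled: nothing to fold into the hugepage math
    fi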
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:21:01.661 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8930288 kB' 'MemAvailable: 10513588 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 462836 kB' 'Inactive: 1456816 kB' 'Active(anon): 130852 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48480 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135588 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6308 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB'
[… xtrace continues: setup/common.sh@32 compares each /proc/meminfo field in turn against HugePages_Surp …]
-- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
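Note on the trace above: the repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries are the xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time until it reaches the requested counter. A minimal sketch of that helper, reconstructed from the trace for readability (names and file paths follow the trace; this is an approximation, not the verbatim SPDK source):

    get_meminfo() {
        # Approximate reconstruction of setup/common.sh:get_meminfo as seen in
        # the xtrace above; illustrative only, not the verbatim SPDK source.
        local get=$1 node=$2      # e.g. get=HugePages_Surp, node=0 (optional)
        local var val line
        local mem_f=/proc/meminfo
        local -a mem

        # With a node argument the per-node file is read instead; its lines
        # carry a "Node <n> " prefix, stripped by the expansion below.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Every "[[ <key> == ... ]] / continue" pair in the log is one pass of
        # this loop: skip lines until the requested key matches, then print
        # its value (e.g. "0" for HugePages_Surp, "512" for HugePages_Total).
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

In this part of the log it is effectively invoked as get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd against /proc/meminfo, and later with an explicit node number against /sys/devices/system/node/node0/meminfo.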
00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:01.662 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8930288 kB' 'MemAvailable: 10513588 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 462780 kB' 'Inactive: 1456816 kB' 'Active(anon): 130796 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122172 kB' 'Mapped: 48476 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135608 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72728 kB' 'KernelStack: 6352 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.663 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 
07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:01.664 nr_hugepages=512 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:21:01.664 resv_hugepages=0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:01.664 surplus_hugepages=0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:01.664 anon_hugepages=0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:21:01.664 07:29:40 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.664 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8930288 kB' 'MemAvailable: 10513588 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 462828 kB' 'Inactive: 1456816 kB' 'Active(anon): 130844 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122236 kB' 'Mapped: 48476 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135604 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72724 kB' 'KernelStack: 6384 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.665 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
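The checks traced around setup/hugepages.sh@99-@117 amount to: read HugePages_Surp, HugePages_Rsvd and HugePages_Total, confirm they account for the 512 pages the custom_alloc test requested ("(( 512 == nr_hugepages + surp + resv ))" in the log), then repeat the read per NUMA node via /sys/devices/system/node/node*/meminfo. A rough sketch of that accounting, reusing the get_meminfo sketch above (the 512 target and the node path follow the trace; verify_custom_alloc is a made-up name for illustration, not a function in the SPDK tree):

    verify_custom_alloc() {
        # Rough sketch of the accounting done around hugepages.sh@99-@117 above.
        local requested=512                      # nr_hugepages echoed in the log
        local surp resv total node

        surp=$(get_meminfo HugePages_Surp)       # surplus pages   (0 in the log)
        resv=$(get_meminfo HugePages_Rsvd)       # reserved pages  (0 in the log)
        total=$(get_meminfo HugePages_Total)     # allocated pool  (512 in the log)

        # Global consistency check, mirroring "(( 512 == nr_hugepages + surp + resv ))".
        (( requested == total + surp + resv )) || return 1

        # The same counters are then read per NUMA node (node0 on this VM).
        for node in /sys/devices/system/node/node[0-9]*; do
            [[ -e $node/meminfo ]] || continue
            node=${node##*node}
            echo "node$node HugePages_Surp: $(get_meminfo HugePages_Surp "$node")"
        done
    }

On the single-node test VM in this run the global pass yields total=512, surp=0, resv=0, and the per-node pass that follows in the trace reads the same counters from node0.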
00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8939164 kB' 'MemUsed: 3302808 kB' 'SwapCached: 0 kB' 'Active: 462780 kB' 'Inactive: 1456816 kB' 'Active(anon): 130796 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1799272 kB' 'Mapped: 48476 kB' 'AnonPages: 122212 kB' 'Shmem: 10472 kB' 'KernelStack: 6368 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62880 kB' 'Slab: 135604 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.666 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:01.667 node0=512 expecting 512 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:21:01.667 00:21:01.667 real 0m0.642s 00:21:01.667 user 0m0.287s 00:21:01.667 sys 0m0.400s 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.667 07:29:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:01.667 ************************************ 00:21:01.667 END TEST custom_alloc 00:21:01.667 ************************************ 00:21:01.667 07:29:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:21:01.667 07:29:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:21:01.667 07:29:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:01.667 07:29:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.667 07:29:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:01.667 ************************************ 00:21:01.667 START TEST no_shrink_alloc 00:21:01.667 ************************************ 00:21:01.667 07:29:40 
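The long runs of "continue" above are setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo (or of a per-node /sys/devices/system/node/nodeN/meminfo) until it reaches the one requested; the custom_alloc case closes once HugePages_Total resolves to 512 and node0 reports a surplus of 0, matching the expected "node0=512 expecting 512". A minimal sketch of that lookup, reconstructed from the xtrace output rather than the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Hedged sketch of the lookup loop the trace repeats for every meminfo field.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem line var val _
        # Per-node lookup, as in "get_meminfo HugePages_Surp 0" in the log.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <id> " prefix; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines above
            echo "$val"                        # e.g. 512 for HugePages_Total here
            return 0
        done
        return 1
    }

Matching against the backslash-escaped literal seen in the trace ([[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]) has the same effect as the quoted comparison in this sketch: the field name is compared as a literal string rather than a glob pattern.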
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:21:01.667 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:21:01.667 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:21:01.667 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:01.668 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:02.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.233 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:02.233 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:02.233 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:02.233 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:02.233 07:29:40 
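The no_shrink_alloc test starting here requests a 2097152 kB (2 GiB) hugepage pool pinned to node 0; with the 2048 kB Hugepagesize reported elsewhere in this log, that works out to the nr_hugepages=1024 the trace records. A rough sketch of that arithmetic, with illustrative variable names rather than the verbatim setup/hugepages.sh helpers:

    size_kb=2097152                                                     # argument to get_test_nr_hugepages
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"                                   # matches the nodes_test entry of 1024 for node 0 above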
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7892928 kB' 'MemAvailable: 9476228 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463280 kB' 'Inactive: 1456816 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122844 kB' 'Mapped: 48560 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135608 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72728 kB' 'KernelStack: 6372 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.233 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.234 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7892928 kB' 'MemAvailable: 9476228 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463004 kB' 'Inactive: 1456816 kB' 'Active(anon): 131020 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135612 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6388 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 
07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.235 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: 00:21:02.235-00:21:02.498, setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / test cycle for each remaining /proc/meminfo key from KReclaimable through CmaFree; none matches HugePages_Surp, so every iteration ends in continue]
00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7892928 kB' 'MemAvailable: 9476228 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 463004 kB' 'Inactive: 1456816 kB' 'Active(anon): 131020 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122624 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 62880 kB' 'Slab: 135612 kB' 'SReclaimable: 62880 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6372 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.498 07:29:40 setup.sh.hugepages.no_shrink_alloc -- 
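The snapshot just printed is the raw material for these lookups: setup/common.sh's get_meminfo walks it one 'key: value' line at a time with IFS=': ' and read -r var val _, and echoes the value once the requested key matches. A rough, stand-alone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    # Minimal re-creation of the lookup visible in the trace: scan "key: value"
    # pairs from /proc/meminfo and print the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Surp:     0" into var=HugePages_Surp, val=0
        # (a trailing "kB" field, when present, falls into the discarded "_").
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are the "continue" lines in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 in the run traced here
    get_meminfo_sketch HugePages_Rsvd   # prints 0 as well

Every non-matching key accounts for one continue/IFS/read triplet in the trace, which is why this loop dominates the log.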
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[trace condensed: 00:21:02.498-00:21:02.500, setup/common.sh@31-32 checks each /proc/meminfo key from Active through Unaccepted against HugePages_Rsvd; none matches, so every iteration ends in continue]
00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:02.500 nr_hugepages=1024 00:21:02.500 resv_hugepages=0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:02.500 surplus_hugepages=0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:02.500 anon_hugepages=0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7893836 kB' 'MemAvailable: 9477136 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 459300 kB' 'Inactive: 1456816 kB' 'Active(anon): 127316 kB' 
'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118648 kB' 'Mapped: 47892 kB' 'Shmem: 10472 kB' 'KReclaimable: 62876 kB' 'Slab: 135544 kB' 'SReclaimable: 62876 kB' 'SUnreclaim: 72668 kB' 'KernelStack: 6256 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.500 07:29:40 setup.sh.hugepages.no_shrink_alloc -- 
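The hugepages.sh steps traced above set surp=0 and resv=0, echo nr_hugepages=1024, and assert (( 1024 == nr_hugepages + surp + resv )); the HugePages_Total lookup now in progress repeats the same bookkeeping against the kernel's own counter. Restated as a self-contained snippet with the values from this run (awk is used here only for brevity; the script itself relies on its own get_meminfo helper):

    nr_hugepages=1024   # requested pool size (echoed as nr_hugepages=1024 above)
    surp=0              # HugePages_Surp from /proc/meminfo in this run
    resv=0              # HugePages_Rsvd from /proc/meminfo in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the trace

    # The pool is consistent when the kernel-reported total equals the requested
    # pages plus surplus plus reserved: here 1024 == 1024 + 0 + 0.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: ${total} == ${nr_hugepages} + ${surp} + ${resv}"
    fi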
setup/common.sh@31 -- # read -r var val _
[trace condensed: 00:21:02.500-00:21:02.502, setup/common.sh@31-32 checks each /proc/meminfo key from Active through CmaFree against HugePages_Total; none matches, so every iteration ends in continue]
00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7893836 kB' 'MemUsed: 4348136 kB' 'SwapCached: 0 kB' 'Active: 459272 kB' 'Inactive: 1456816 kB' 'Active(anon): 127288 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1799272 kB' 'Mapped: 47892 kB' 'AnonPages: 118920 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62876 kB' 'Slab: 135544 kB' 'SReclaimable: 62876 kB' 'SUnreclaim: 72668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.502 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 
07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:02.503 node0=1024 expecting 1024 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:21:02.503 07:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:02.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.022 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.022 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.022 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.023 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.023 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
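The loop traced above is the setup/common.sh get_meminfo helper walking /proc/meminfo key by key until it reaches the field requested by verify_nr_hugepages (AnonHugePages, HugePages_Surp, HugePages_Rsvd) and echoing its value. Below is a minimal sketch of that helper, reconstructed only from the setup/common.sh@16-@33 line tags visible in this log; a couple of traced checks are omitted and the real test/setup/common.sh in the SPDK repo may differ in detail.

    #!/usr/bin/env bash
    shopt -s extglob                                  # needed for the +([0-9]) pattern below

    get_meminfo() {                                   # sketch reconstructed from setup/common.sh@16-@33 in the trace
        local get=$1                                  # meminfo key to look up, e.g. HugePages_Surp
        local node=$2                                 # optional NUMA node index; empty means system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node file (branch not taken in the run above)
        fi

        mapfile -t mem < "$mem_f"                     # one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node N " prefix that per-node files carry

        while IFS=': ' read -r var val _; do          # split "Key:   value kB" into key/value
            [[ $var == "$get" ]] || continue          # the per-key comparison repeated throughout the trace
            echo "$val" && return 0                   # e.g. prints 0 for HugePages_Surp above
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp                        # -> 0 on the node traced above

verify_nr_hugepages (setup/hugepages.sh@89 onward) calls this helper several times per pass, which is why the same per-key comparison block repeats for every meminfo field throughout this part of the log; in the run above each lookup returns 0 surplus/reserved pages, and the INFO line records that the 1024 pages already allocated on node0 are kept even though setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no.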
00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7897900 kB' 'MemAvailable: 9481192 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 460484 kB' 'Inactive: 1456816 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 119672 kB' 'Mapped: 48052 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135432 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72568 kB' 'KernelStack: 6392 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.023 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:03.024 07:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899712 kB' 'MemAvailable: 9483004 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 459784 kB' 'Inactive: 1456816 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118864 kB' 'Mapped: 48016 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135432 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72568 kB' 'KernelStack: 6296 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.024 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 
07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.025 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899800 kB' 'MemAvailable: 9483092 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 459232 kB' 'Inactive: 1456816 kB' 'Active(anon): 127248 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118364 kB' 'Mapped: 47736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135428 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72564 kB' 'KernelStack: 6272 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.026 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.027 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:21:03.028 nr_hugepages=1024 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:21:03.028 resv_hugepages=0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:21:03.028 surplus_hugepages=0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:21:03.028 anon_hugepages=0 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899800 kB' 'MemAvailable: 9483092 kB' 'Buffers: 2436 kB' 'Cached: 1796836 kB' 'SwapCached: 0 kB' 'Active: 459492 kB' 'Inactive: 1456816 kB' 'Active(anon): 127508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118624 kB' 'Mapped: 47736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62864 kB' 'Slab: 135428 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72564 kB' 'KernelStack: 6272 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.028 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 
07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
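The field-by-field trace above and below is setup/common.sh's get_meminfo loop: it reads a meminfo dump one "Key: value" pair at a time, skipping every field that is not the requested one, and echoes the value once the key matches. A minimal standalone sketch of that kind of lookup, with a hypothetical helper name and sed/awk standing in for the script's read loop, could be:

  # illustrative sketch only, not the SPDK helper itself
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local f=/proc/meminfo
      # when a node id is given, read the node-local sysfs copy instead
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          f=/sys/devices/system/node/node$node/meminfo
      fi
      # node-local lines carry a "Node N " prefix; strip it, then print the
      # value column of the line whose first field is "Key:"
      sed 's/^Node [0-9]* //' "$f" | awk -v k="$key:" '$1 == k { print $2 }'
  }

For example, get_meminfo_sketch HugePages_Surp would print 0 on the machine traced here, and get_meminfo_sketch HugePages_Total 0 would print 1024 from node0's meminfo.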
00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.029 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
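Earlier in the trace, setup/hugepages.sh@107 and @109 compare the HugePages_Total value returned by get_meminfo (1024) against nr_hugepages plus the surplus and reserved counts it just collected. A hedged sketch of that accounting check, using illustrative names rather than the SPDK functions, might read:

  # illustrative sketch of the accounting check performed above
  check_hugepage_accounting() {
      local nr surp resv total
      nr=$(cat /proc/sys/vm/nr_hugepages)
      surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)
      resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)
      total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
      echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
      # mirrors the (( 1024 == nr_hugepages + surp + resv )) comparison in the trace
      (( total == nr + surp + resv ))
  }

With the values printed in this run (surp=0, resv=0, nr_hugepages=1024) the comparison holds and the test proceeds to the per-node check.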
00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899548 kB' 'MemUsed: 4342424 kB' 'SwapCached: 0 kB' 'Active: 459156 kB' 'Inactive: 1456816 kB' 'Active(anon): 127172 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1799272 kB' 'Mapped: 47736 kB' 'AnonPages: 118356 kB' 'Shmem: 10472 kB' 'KernelStack: 
6288 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62864 kB' 'Slab: 135428 kB' 'SReclaimable: 62864 kB' 'SUnreclaim: 72564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.030 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[xtrace output condensed: the meminfo-reading loop in setup/common.sh repeats the same three records (IFS=': ', read -r var val _, continue) for each remaining field of the memory report: Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped. None of these match HugePages_Surp, so the loop continues until the final fields shown below.]
00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:21:03.031 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:21:03.032 node0=1024 expecting 1024 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:21:03.032 00:21:03.032 real 0m1.369s 00:21:03.032 user 0m0.622s 00:21:03.032 sys 0m0.842s 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.032 ************************************ 00:21:03.032 END TEST no_shrink_alloc 00:21:03.032 07:29:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:21:03.032 ************************************ 00:21:03.288 07:29:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:03.288 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:21:03.289 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:21:03.289 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:21:03.289 07:29:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:21:03.289 ************************************ 00:21:03.289 END TEST hugepages 00:21:03.289 ************************************ 00:21:03.289 00:21:03.289 real 0m5.814s 00:21:03.289 user 0m2.569s 00:21:03.289 sys 0m3.477s 00:21:03.289 07:29:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.289 07:29:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:21:03.289 07:29:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:21:03.289 07:29:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:21:03.289 07:29:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:03.289 07:29:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.289 07:29:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:03.289 ************************************ 00:21:03.289 START TEST driver 00:21:03.289 ************************************ 00:21:03.289 07:29:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:21:03.289 * Looking for test storage... 00:21:03.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:21:03.289 07:29:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:21:03.289 07:29:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:21:03.289 07:29:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:09.871 07:29:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:21:09.871 07:29:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:09.871 07:29:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.871 07:29:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:21:09.871 ************************************ 00:21:09.871 START TEST guess_driver 00:21:09.871 ************************************ 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:21:09.871 
07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:21:09.871 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:21:09.871 Looking for driver=uio_pci_generic 00:21:09.871 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:21:09.872 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:09.872 07:29:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:21:09.872 07:29:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:21:09.872 07:29:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:09.872 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:21:09.872 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:21:09.872 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:21:10.434 07:29:48 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:21:10.434 07:29:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:17.001 00:21:17.001 real 0m7.173s 00:21:17.001 user 0m0.791s 00:21:17.001 sys 0m1.459s 00:21:17.001 07:29:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.001 07:29:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:21:17.001 ************************************ 00:21:17.001 END TEST guess_driver 00:21:17.001 ************************************ 00:21:17.001 07:29:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:21:17.001 00:21:17.001 real 0m13.216s 00:21:17.001 user 0m1.123s 00:21:17.001 sys 0m2.274s 00:21:17.001 07:29:54 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.001 07:29:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:21:17.001 ************************************ 00:21:17.001 END TEST driver 00:21:17.001 ************************************ 00:21:17.001 07:29:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:21:17.001 07:29:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:21:17.001 07:29:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:17.001 07:29:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.001 07:29:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:17.001 ************************************ 00:21:17.001 START TEST devices 00:21:17.001 ************************************ 00:21:17.001 07:29:54 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:21:17.001 * Looking for test storage... 
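The guess_driver run that just finished reduces to two checks: prefer vfio-pci when the kernel has populated IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise accept uio_pci_generic if modprobe can resolve it to a real kernel module, which is the modprobe --show-depends probe visible in the trace. A rough standalone rendering of that decision follows; the function name is illustrative and this is not the setup/driver.sh source itself.

#!/usr/bin/env bash
# Decide which userspace I/O driver to bind NVMe devices to, in the same
# order of preference as the trace: vfio-pci first, uio_pci_generic second.
pick_driver() {
    local unsafe="" groups
    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    shopt -u nullglob
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci is only usable with populated IOMMU groups or unsafe no-IOMMU mode.
    if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Fall back to uio_pci_generic if modprobe resolves it to an actual .ko.
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

pick_driver   # on the VM traced above this prints: uio_pci_generic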
00:21:17.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:21:17.001 07:29:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:21:17.001 07:29:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:21:17.001 07:29:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:21:17.001 07:29:55 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:17.567 07:29:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.567 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:21:17.568 07:29:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:21:17.568 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:21:17.568 07:29:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:17.568 07:29:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:17.828 No valid GPT data, bailing 00:21:17.828 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:17.828 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:17.828 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:17.828 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:21:17.828 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:21:17.829 
07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:17.829 No valid GPT data, bailing 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:21:17.829 No valid GPT data, bailing 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:21:17.829 07:29:56 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:21:17.829 No valid GPT data, bailing 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:21:17.829 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:21:17.829 07:29:56 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:21:18.087 No valid GPT data, bailing 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:21:18.087 07:29:56 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:21:18.087 No valid GPT data, bailing 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:21:18.087 07:29:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:21:18.087 07:29:56 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:21:18.087 07:29:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:21:18.087 07:29:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.087 07:29:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.087 07:29:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:21:18.087 ************************************ 00:21:18.087 START TEST nvme_mount 00:21:18.087 ************************************ 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:21:18.087 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:21:18.088 07:29:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:21:19.022 Creating new GPT entries in memory. 00:21:19.022 GPT data structures destroyed! You may now partition the disk using fdisk or 00:21:19.022 other utilities. 00:21:19.022 07:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:21:19.022 07:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:19.022 07:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:21:19.022 07:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:21:19.022 07:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:21:20.427 Creating new GPT entries in memory. 00:21:20.427 The operation has completed successfully. 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59497 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.427 07:29:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.686 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:20.944 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:20.944 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:21:21.202 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:21:21.202 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:21:21.461 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:21:21.461 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
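Stepping back to the device scan earlier in this run: each "No valid GPT data, bailing" line is the wanted outcome, because the test only claims namespaces that are not zoned, carry no partition table, and are large enough. A compact sketch of that filter, assuming the sysfs layout shown in the trace; the 3 GiB threshold mirrors the min_disk_size value logged above, and running blkid on raw devices generally needs root.

#!/usr/bin/env bash
# Collect NVMe namespaces that are safe to test on: whole namespaces only,
# not zoned, no existing partition table, and at least min_disk_size bytes.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
blocks=()

for path in /sys/block/nvme*n*; do
    [[ -e $path ]] || continue
    dev=${path##*/}
    [[ $dev == *c* ]] && continue                        # skip per-controller nodes like nvme3c3n1
    [[ $(< "$path/queue/zoned") != none ]] && continue   # skip zoned (ZNS) namespaces
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # already partitioned
    size=$(($(< "$path/size") * 512))                    # the size file is in 512-byte sectors
    ((size >= min_disk_size)) || continue
    blocks+=("$dev")
done

((${#blocks[@]})) && printf 'usable test disk: %s\n' "${blocks[@]}"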
00:21:21.461 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:21:21.461 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:21:21.461 07:29:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:21:21.461 07:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:21.719 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.978 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:21.978 
07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.978 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:21.978 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:21.978 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:21.978 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:22.237 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:22.237 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:21:22.495 07:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:22.753 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:22.753 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:21:22.753 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:21:22.753 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:22.753 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:22.753 07:30:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:23.011 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:23.301 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:23.301 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:21:23.558 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:21:23.558 00:21:23.558 real 0m5.389s 00:21:23.558 user 0m1.416s 00:21:23.558 sys 0m1.663s 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.558 07:30:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:21:23.558 ************************************ 00:21:23.558 END TEST nvme_mount 00:21:23.558 ************************************ 00:21:23.558 07:30:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:21:23.558 07:30:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:21:23.558 07:30:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:23.558 07:30:02 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.558 07:30:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:21:23.558 ************************************ 00:21:23.558 START TEST dm_mount 00:21:23.558 ************************************ 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:21:23.558 
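The cleanup path traced a little further up (unmount the test mount point if it is still mounted, then wipe signatures from the partition and from the whole disk) is short enough to show end to end. A hedged sketch; the mount point and device paths are the ones from this run, used purely as an example, and wipefs needs root.

#!/usr/bin/env bash
# Undo what nvme_mount set up: unmount the scratch mount point, then strip
# filesystem and partition-table signatures so later tests see a clean disk.
cleanup_nvme() {
    local mount_point=$1 part=$2 disk=$3
    if mountpoint -q "$mount_point"; then
        umount "$mount_point"
    fi
    [[ -b $part ]] && wipefs --all "$part"
    [[ -b $disk ]] && wipefs --all "$disk"
    return 0
}

cleanup_nvme /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /dev/nvme0n1p1 /dev/nvme0n1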
07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:21:23.558 07:30:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:21:24.489 Creating new GPT entries in memory. 00:21:24.489 GPT data structures destroyed! You may now partition the disk using fdisk or 00:21:24.489 other utilities. 00:21:24.489 07:30:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:21:24.489 07:30:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:24.489 07:30:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:21:24.489 07:30:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:21:24.489 07:30:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:21:25.944 Creating new GPT entries in memory. 00:21:25.944 The operation has completed successfully. 00:21:25.944 07:30:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:21:25.944 07:30:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:25.944 07:30:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:21:25.944 07:30:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:21:25.944 07:30:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:21:26.876 The operation has completed successfully. 
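Both nvme_mount and dm_mount lay out the disk the same way before doing anything else: zap any existing GPT, add fixed-size partitions one after another (here two 128 MiB partitions, sectors 2048-264191 and 264192-526335), and wait for udev to publish the new nodes, which is what the sync_dev_uevents.sh call in the trace is for. A standalone approximation follows; the udevadm settle wait and the default size are assumptions standing in for the SPDK helper.

#!/usr/bin/env bash
set -e
# Carve a disk into N equally sized GPT partitions. Requires root.
partition_drive() {
    local disk=$1 part_no=${2:-1} size_sectors=${3:-262144}   # 262144 sectors = 128 MiB
    local part start end
    sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR structures
    start=2048
    for ((part = 1; part <= part_no; part++)); do
        end=$((start + size_sectors - 1))
        sgdisk "$disk" --new="$part:$start:$end"
        start=$((end + 1))
    done
    udevadm settle                        # stand-in for sync_dev_uevents.sh
}

partition_drive /dev/nvme0n1 2   # reproduces the two-partition layout above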
00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60122 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:26.876 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.134 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.392 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.392 07:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.649 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:21:27.650 07:30:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:27.908 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:28.166 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:28.424 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:28.424 07:30:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
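The teardown traced at this point (cleanup_dm) unmounts the dm mount point if needed, removes the device-mapper target, and wipes the filesystem signatures from both backing partitions; the wipefs output for those partitions follows below. A minimal sketch of that teardown, assuming the mount point and partition names from this run, is:

#!/usr/bin/env bash
# Sketch of the cleanup_dm teardown traced above; paths are the ones used in this log.
set -u

dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
dm_name=nvme_dm_test

cleanup_dm() {
    # Only unmount if something is actually mounted there.
    if mountpoint -q "$dm_mount"; then
        umount "$dm_mount"
    fi
    # Tear down the device-mapper target if it still exists.
    if [[ -L /dev/mapper/$dm_name ]]; then
        dmsetup remove --force "$dm_name"
    fi
    # Erase filesystem/partition signatures from the backing partitions.
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
        [[ -b $part ]] && wipefs --all "$part"
    done
}

cleanup_dm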
00:21:28.682 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:21:28.682 00:21:28.682 real 0m5.224s 00:21:28.682 user 0m0.960s 00:21:28.682 sys 0m1.172s 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.682 ************************************ 00:21:28.682 END TEST dm_mount 00:21:28.682 07:30:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:21:28.682 ************************************ 00:21:28.939 07:30:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:21:28.939 07:30:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:21:29.197 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:21:29.197 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:21:29.197 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:21:29.197 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:21:29.197 07:30:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:21:29.197 00:21:29.197 real 0m12.627s 00:21:29.197 user 0m3.254s 00:21:29.197 sys 0m3.663s 00:21:29.197 07:30:07 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.197 07:30:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:21:29.197 ************************************ 00:21:29.197 END TEST devices 00:21:29.197 ************************************ 00:21:29.197 07:30:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:21:29.197 00:21:29.197 real 0m43.776s 00:21:29.197 user 0m10.008s 00:21:29.197 sys 0m13.508s 00:21:29.197 ************************************ 00:21:29.197 END TEST setup.sh 00:21:29.197 ************************************ 00:21:29.197 07:30:07 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.197 07:30:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:21:29.197 07:30:07 -- common/autotest_common.sh@1142 -- # return 0 00:21:29.197 07:30:07 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:21:29.762 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.327 Hugepages 00:21:30.327 node hugesize free / total 00:21:30.327 node0 1048576kB 0 / 0 00:21:30.327 node0 2048kB 2048 / 2048 00:21:30.327 00:21:30.327 Type BDF Vendor Device NUMA Driver Device Block devices 00:21:30.327 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:21:30.327 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:21:30.327 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:21:30.585 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:21:30.585 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:21:30.585 07:30:09 -- spdk/autotest.sh@130 -- # uname -s 00:21:30.585 07:30:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:21:30.585 07:30:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:21:30.585 07:30:09 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:31.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.716 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.716 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.716 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.716 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.716 07:30:10 -- common/autotest_common.sh@1532 -- # sleep 1 00:21:33.087 07:30:11 -- common/autotest_common.sh@1533 -- # bdfs=() 00:21:33.087 07:30:11 -- common/autotest_common.sh@1533 -- # local bdfs 00:21:33.087 07:30:11 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:21:33.087 07:30:11 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:21:33.087 07:30:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:33.087 07:30:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:21:33.088 07:30:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:33.088 07:30:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:33.088 07:30:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:33.088 07:30:11 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:21:33.088 07:30:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:33.088 07:30:11 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:33.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:33.345 Waiting for block devices as requested 00:21:33.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.601 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.601 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:33.601 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:38.887 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:38.887 07:30:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:21:38.887 07:30:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1557 -- # continue 00:21:38.887 07:30:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:21:38.887 07:30:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1557 -- # continue 00:21:38.887 07:30:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:21:38.887 07:30:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1557 -- # continue 00:21:38.887 07:30:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:21:38.887 07:30:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:21:38.887 07:30:17 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:21:38.887 07:30:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:21:38.887 07:30:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:21:38.887 07:30:17 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:21:38.887 07:30:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:21:38.887 07:30:17 -- common/autotest_common.sh@1557 -- # continue 00:21:38.887 07:30:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:21:38.887 07:30:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.887 07:30:17 -- common/autotest_common.sh@10 -- # set +x 00:21:38.887 07:30:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:21:38.887 07:30:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.887 07:30:17 -- common/autotest_common.sh@10 -- # set +x 00:21:38.887 07:30:17 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:39.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.022 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.280 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:40.280 07:30:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:21:40.280 07:30:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.280 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:21:40.280 07:30:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:21:40.280 07:30:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:21:40.280 07:30:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:21:40.280 07:30:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:21:40.280 07:30:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:21:40.280 07:30:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:21:40.280 07:30:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:40.280 07:30:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:21:40.280 07:30:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:40.280 07:30:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:40.280 07:30:18 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:40.280 07:30:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:21:40.280 07:30:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:21:40.280 07:30:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:21:40.280 07:30:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:21:40.280 07:30:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:21:40.280 07:30:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:21:40.280 07:30:18 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:21:40.280 07:30:18 -- common/autotest_common.sh@1580 -- # device=0x0010 00:21:40.280 07:30:18 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:21:40.280 07:30:18 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:21:40.280 07:30:18 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:21:40.280 07:30:18 -- common/autotest_common.sh@1593 -- # return 0 00:21:40.280 07:30:18 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:21:40.280 07:30:18 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:21:40.280 07:30:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:21:40.280 07:30:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:21:40.280 07:30:18 -- spdk/autotest.sh@162 -- # timing_enter lib 00:21:40.280 07:30:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.280 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:21:40.280 07:30:18 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:21:40.280 07:30:18 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:21:40.280 07:30:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.280 07:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.280 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:21:40.280 ************************************ 00:21:40.280 START TEST env 00:21:40.280 ************************************ 00:21:40.280 07:30:18 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:21:40.539 * Looking for test storage... 00:21:40.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:21:40.539 07:30:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:21:40.539 07:30:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.539 07:30:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.539 07:30:18 env -- common/autotest_common.sh@10 -- # set +x 00:21:40.539 ************************************ 00:21:40.539 START TEST env_memory 00:21:40.539 ************************************ 00:21:40.539 07:30:18 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:21:40.539 00:21:40.539 00:21:40.539 CUnit - A unit testing framework for C - Version 2.1-3 00:21:40.539 http://cunit.sourceforge.net/ 00:21:40.539 00:21:40.539 00:21:40.539 Suite: memory 00:21:40.539 Test: alloc and free memory map ...[2024-07-15 07:30:19.046104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:21:40.539 passed 00:21:40.539 Test: mem map translation ...[2024-07-15 07:30:19.096200] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:21:40.539 [2024-07-15 07:30:19.096545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:21:40.539 [2024-07-15 07:30:19.096824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:21:40.539 [2024-07-15 07:30:19.097009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:21:40.796 passed 00:21:40.797 Test: mem map registration ...[2024-07-15 07:30:19.174665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:21:40.797 [2024-07-15 07:30:19.174943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:21:40.797 passed 00:21:40.797 Test: mem map adjacent registrations ...passed 00:21:40.797 00:21:40.797 Run Summary: Type Total Ran Passed Failed Inactive 00:21:40.797 suites 1 1 n/a 0 0 00:21:40.797 tests 4 4 4 0 0 00:21:40.797 asserts 152 152 152 0 n/a 00:21:40.797 00:21:40.797 Elapsed time = 0.270 seconds 00:21:40.797 00:21:40.797 real 0m0.318s 00:21:40.797 user 0m0.282s 00:21:40.797 sys 0m0.028s 00:21:40.797 07:30:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.797 07:30:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:21:40.797 ************************************ 00:21:40.797 END TEST env_memory 00:21:40.797 ************************************ 00:21:40.797 07:30:19 env -- common/autotest_common.sh@1142 -- # return 0 00:21:40.797 07:30:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:21:40.797 07:30:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:40.797 07:30:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.797 07:30:19 env -- common/autotest_common.sh@10 -- # set +x 00:21:40.797 ************************************ 00:21:40.797 START TEST env_vtophys 00:21:40.797 ************************************ 00:21:40.797 07:30:19 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:21:40.797 EAL: lib.eal log level changed from notice to debug 00:21:40.797 EAL: Detected lcore 0 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 1 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 2 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 3 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 4 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 5 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 6 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 7 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 8 as core 0 on socket 0 00:21:40.797 EAL: Detected lcore 9 as core 0 on socket 0 00:21:41.055 EAL: Maximum logical cores by configuration: 128 00:21:41.055 EAL: Detected CPU lcores: 10 00:21:41.055 EAL: Detected NUMA nodes: 1 00:21:41.055 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:21:41.055 EAL: Detected shared linkage of DPDK 00:21:41.055 EAL: No shared files mode enabled, IPC will be disabled 00:21:41.055 EAL: Selected IOVA mode 'PA' 00:21:41.055 EAL: Probing VFIO support... 00:21:41.055 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:21:41.055 EAL: VFIO modules not loaded, skipping VFIO support... 00:21:41.055 EAL: Ask a virtual area of 0x2e000 bytes 00:21:41.055 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:21:41.055 EAL: Setting up physically contiguous memory... 
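Every test in this log, including the env_memory and env_vtophys units running here, is launched through the run_test helper from autotest_common.sh, which is what produces the "START TEST ..."/"END TEST ..." banners and the real/user/sys timing triplets around each unit. The trace only shows fragments of it (the argument-count check and the xtrace toggles), so the following is a hedged reconstruction of that wrapper rather than the verbatim SPDK function:

#!/usr/bin/env bash
# Hedged reconstruction of the run_test wrapper seen throughout this log.

run_test() {
    [ "$#" -le 1 ] && { echo "usage: run_test <name> <command> [args...]" >&2; return 1; }
    local name=$1
    shift

    echo "************************************"
    echo "START TEST $name"
    echo "************************************"

    # `time` supplies the real/user/sys lines printed after each test.
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Example, mirroring the invocation traced above:
# run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut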
00:21:41.055 EAL: Setting maximum number of open files to 524288 00:21:41.055 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:21:41.055 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:21:41.055 EAL: Ask a virtual area of 0x61000 bytes 00:21:41.055 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:21:41.055 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:21:41.055 EAL: Ask a virtual area of 0x400000000 bytes 00:21:41.055 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:21:41.055 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:21:41.055 EAL: Ask a virtual area of 0x61000 bytes 00:21:41.055 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:21:41.055 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:21:41.055 EAL: Ask a virtual area of 0x400000000 bytes 00:21:41.055 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:21:41.055 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:21:41.055 EAL: Ask a virtual area of 0x61000 bytes 00:21:41.055 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:21:41.055 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:21:41.055 EAL: Ask a virtual area of 0x400000000 bytes 00:21:41.055 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:21:41.055 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:21:41.055 EAL: Ask a virtual area of 0x61000 bytes 00:21:41.055 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:21:41.055 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:21:41.055 EAL: Ask a virtual area of 0x400000000 bytes 00:21:41.055 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:21:41.055 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:21:41.055 EAL: Hugepages will be freed exactly as allocated. 00:21:41.055 EAL: No shared files mode enabled, IPC is disabled 00:21:41.055 EAL: No shared files mode enabled, IPC is disabled 00:21:41.055 EAL: TSC frequency is ~2200000 KHz 00:21:41.055 EAL: Main lcore 0 is ready (tid=7f806ce0ea40;cpuset=[0]) 00:21:41.055 EAL: Trying to obtain current memory policy. 00:21:41.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.055 EAL: Restoring previous memory policy: 0 00:21:41.055 EAL: request: mp_malloc_sync 00:21:41.055 EAL: No shared files mode enabled, IPC is disabled 00:21:41.055 EAL: Heap on socket 0 was expanded by 2MB 00:21:41.055 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:21:41.055 EAL: No PCI address specified using 'addr=' in: bus=pci 00:21:41.055 EAL: Mem event callback 'spdk:(nil)' registered 00:21:41.055 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:21:41.055 00:21:41.055 00:21:41.055 CUnit - A unit testing framework for C - Version 2.1-3 00:21:41.055 http://cunit.sourceforge.net/ 00:21:41.055 00:21:41.055 00:21:41.055 Suite: components_suite 00:21:41.621 Test: vtophys_malloc_test ...passed 00:21:41.621 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
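As a quick sanity check on the memseg reservations above: each of the 4 segment lists holds n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), and 8192 x 2 MiB = 16 GiB = 0x400000000 bytes, which is exactly the "size 400000000" reserved per list; across the 4 lists EAL therefore reserves about 64 GiB of virtual address space up front, none of it backed by hugepages until the heap is actually expanded.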
00:21:41.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.621 EAL: Restoring previous memory policy: 4 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was expanded by 4MB 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was shrunk by 4MB 00:21:41.621 EAL: Trying to obtain current memory policy. 00:21:41.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.621 EAL: Restoring previous memory policy: 4 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was expanded by 6MB 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was shrunk by 6MB 00:21:41.621 EAL: Trying to obtain current memory policy. 00:21:41.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.621 EAL: Restoring previous memory policy: 4 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was expanded by 10MB 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was shrunk by 10MB 00:21:41.621 EAL: Trying to obtain current memory policy. 00:21:41.621 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.621 EAL: Restoring previous memory policy: 4 00:21:41.621 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.621 EAL: request: mp_malloc_sync 00:21:41.621 EAL: No shared files mode enabled, IPC is disabled 00:21:41.621 EAL: Heap on socket 0 was expanded by 18MB 00:21:41.879 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.879 EAL: request: mp_malloc_sync 00:21:41.879 EAL: No shared files mode enabled, IPC is disabled 00:21:41.879 EAL: Heap on socket 0 was shrunk by 18MB 00:21:41.879 EAL: Trying to obtain current memory policy. 00:21:41.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.879 EAL: Restoring previous memory policy: 4 00:21:41.879 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.879 EAL: request: mp_malloc_sync 00:21:41.879 EAL: No shared files mode enabled, IPC is disabled 00:21:41.879 EAL: Heap on socket 0 was expanded by 34MB 00:21:41.879 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.879 EAL: request: mp_malloc_sync 00:21:41.879 EAL: No shared files mode enabled, IPC is disabled 00:21:41.879 EAL: Heap on socket 0 was shrunk by 34MB 00:21:41.879 EAL: Trying to obtain current memory policy. 
00:21:41.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:41.879 EAL: Restoring previous memory policy: 4 00:21:41.879 EAL: Calling mem event callback 'spdk:(nil)' 00:21:41.879 EAL: request: mp_malloc_sync 00:21:41.879 EAL: No shared files mode enabled, IPC is disabled 00:21:41.879 EAL: Heap on socket 0 was expanded by 66MB 00:21:42.136 EAL: Calling mem event callback 'spdk:(nil)' 00:21:42.136 EAL: request: mp_malloc_sync 00:21:42.136 EAL: No shared files mode enabled, IPC is disabled 00:21:42.136 EAL: Heap on socket 0 was shrunk by 66MB 00:21:42.136 EAL: Trying to obtain current memory policy. 00:21:42.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:42.136 EAL: Restoring previous memory policy: 4 00:21:42.136 EAL: Calling mem event callback 'spdk:(nil)' 00:21:42.136 EAL: request: mp_malloc_sync 00:21:42.136 EAL: No shared files mode enabled, IPC is disabled 00:21:42.136 EAL: Heap on socket 0 was expanded by 130MB 00:21:42.394 EAL: Calling mem event callback 'spdk:(nil)' 00:21:42.394 EAL: request: mp_malloc_sync 00:21:42.394 EAL: No shared files mode enabled, IPC is disabled 00:21:42.394 EAL: Heap on socket 0 was shrunk by 130MB 00:21:42.652 EAL: Trying to obtain current memory policy. 00:21:42.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:42.652 EAL: Restoring previous memory policy: 4 00:21:42.652 EAL: Calling mem event callback 'spdk:(nil)' 00:21:42.652 EAL: request: mp_malloc_sync 00:21:42.652 EAL: No shared files mode enabled, IPC is disabled 00:21:42.652 EAL: Heap on socket 0 was expanded by 258MB 00:21:43.218 EAL: Calling mem event callback 'spdk:(nil)' 00:21:43.218 EAL: request: mp_malloc_sync 00:21:43.218 EAL: No shared files mode enabled, IPC is disabled 00:21:43.218 EAL: Heap on socket 0 was shrunk by 258MB 00:21:43.781 EAL: Trying to obtain current memory policy. 00:21:43.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:43.781 EAL: Restoring previous memory policy: 4 00:21:43.781 EAL: Calling mem event callback 'spdk:(nil)' 00:21:43.781 EAL: request: mp_malloc_sync 00:21:43.781 EAL: No shared files mode enabled, IPC is disabled 00:21:43.781 EAL: Heap on socket 0 was expanded by 514MB 00:21:44.714 EAL: Calling mem event callback 'spdk:(nil)' 00:21:44.970 EAL: request: mp_malloc_sync 00:21:44.970 EAL: No shared files mode enabled, IPC is disabled 00:21:44.970 EAL: Heap on socket 0 was shrunk by 514MB 00:21:45.901 EAL: Trying to obtain current memory policy. 
00:21:45.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:21:46.159 EAL: Restoring previous memory policy: 4 00:21:46.159 EAL: Calling mem event callback 'spdk:(nil)' 00:21:46.159 EAL: request: mp_malloc_sync 00:21:46.159 EAL: No shared files mode enabled, IPC is disabled 00:21:46.159 EAL: Heap on socket 0 was expanded by 1026MB 00:21:48.063 EAL: Calling mem event callback 'spdk:(nil)' 00:21:48.326 EAL: request: mp_malloc_sync 00:21:48.326 EAL: No shared files mode enabled, IPC is disabled 00:21:48.326 EAL: Heap on socket 0 was shrunk by 1026MB 00:21:49.704 passed 00:21:49.704 00:21:49.704 Run Summary: Type Total Ran Passed Failed Inactive 00:21:49.704 suites 1 1 n/a 0 0 00:21:49.704 tests 2 2 2 0 0 00:21:49.704 asserts 5411 5411 5411 0 n/a 00:21:49.704 00:21:49.704 Elapsed time = 8.507 seconds 00:21:49.704 EAL: Calling mem event callback 'spdk:(nil)' 00:21:49.704 EAL: request: mp_malloc_sync 00:21:49.704 EAL: No shared files mode enabled, IPC is disabled 00:21:49.704 EAL: Heap on socket 0 was shrunk by 2MB 00:21:49.704 EAL: No shared files mode enabled, IPC is disabled 00:21:49.704 EAL: No shared files mode enabled, IPC is disabled 00:21:49.704 EAL: No shared files mode enabled, IPC is disabled 00:21:49.704 00:21:49.704 real 0m8.869s 00:21:49.704 user 0m7.329s 00:21:49.704 sys 0m1.360s 00:21:49.704 ************************************ 00:21:49.704 END TEST env_vtophys 00:21:49.704 ************************************ 00:21:49.704 07:30:28 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.704 07:30:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:21:49.704 07:30:28 env -- common/autotest_common.sh@1142 -- # return 0 00:21:49.704 07:30:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:21:49.704 07:30:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:49.704 07:30:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.704 07:30:28 env -- common/autotest_common.sh@10 -- # set +x 00:21:49.704 ************************************ 00:21:49.704 START TEST env_pci 00:21:49.704 ************************************ 00:21:49.704 07:30:28 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:21:49.704 00:21:49.704 00:21:49.704 CUnit - A unit testing framework for C - Version 2.1-3 00:21:49.704 http://cunit.sourceforge.net/ 00:21:49.704 00:21:49.704 00:21:49.704 Suite: pci 00:21:49.704 Test: pci_hook ...[2024-07-15 07:30:28.308999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61966 has claimed it 00:21:49.962 passed 00:21:49.962 00:21:49.962 Run Summary: Type Total Ran Passed Failed Inactive 00:21:49.962 suites 1 1 n/a 0 0 00:21:49.962 tests 1 1 1 0 0 00:21:49.962 asserts 25 25 25 0 n/a 00:21:49.962 00:21:49.962 Elapsed time = 0.009 seconds 00:21:49.962 EAL: Cannot find device (10000:00:01.0) 00:21:49.962 EAL: Failed to attach device on primary process 00:21:49.962 ************************************ 00:21:49.962 END TEST env_pci 00:21:49.962 ************************************ 00:21:49.962 00:21:49.962 real 0m0.083s 00:21:49.962 user 0m0.035s 00:21:49.962 sys 0m0.047s 00:21:49.962 07:30:28 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.962 07:30:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:21:49.962 07:30:28 env -- common/autotest_common.sh@1142 -- # 
return 0 00:21:49.962 07:30:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:21:49.962 07:30:28 env -- env/env.sh@15 -- # uname 00:21:49.962 07:30:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:21:49.962 07:30:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:21:49.962 07:30:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:21:49.962 07:30:28 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:49.962 07:30:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.962 07:30:28 env -- common/autotest_common.sh@10 -- # set +x 00:21:49.962 ************************************ 00:21:49.962 START TEST env_dpdk_post_init 00:21:49.962 ************************************ 00:21:49.962 07:30:28 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:21:49.962 EAL: Detected CPU lcores: 10 00:21:49.962 EAL: Detected NUMA nodes: 1 00:21:49.962 EAL: Detected shared linkage of DPDK 00:21:49.962 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:21:49.962 EAL: Selected IOVA mode 'PA' 00:21:50.220 TELEMETRY: No legacy callbacks, legacy socket not created 00:21:50.220 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:21:50.220 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:21:50.220 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:21:50.220 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:21:50.220 Starting DPDK initialization... 00:21:50.220 Starting SPDK post initialization... 00:21:50.220 SPDK NVMe probe 00:21:50.220 Attaching to 0000:00:10.0 00:21:50.220 Attaching to 0000:00:11.0 00:21:50.220 Attaching to 0000:00:12.0 00:21:50.220 Attaching to 0000:00:13.0 00:21:50.220 Attached to 0000:00:10.0 00:21:50.220 Attached to 0000:00:11.0 00:21:50.220 Attached to 0000:00:13.0 00:21:50.220 Attached to 0000:00:12.0 00:21:50.220 Cleaning up... 
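The repeated "nvme -> uio_pci_generic" and "uio_pci_generic -> nvme" lines in this log come from scripts/setup.sh rebinding the NVMe controllers between the kernel driver and a userspace-capable stub before and after DPDK/SPDK runs. The real script does considerably more (allow/deny lists, hugepage setup, vfio detection); the sketch below only shows the core sysfs rebinding mechanism and assumes the uio_pci_generic module is already loaded, as it is in this run:

#!/usr/bin/env bash
# Minimal sketch of the per-device driver rebinding behind lines like
# "0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic".
set -euo pipefail

bdf=${1:-0000:00:10.0}
target=${2:-uio_pci_generic}
dev=/sys/bus/pci/devices/$bdf

# Release the device from its current kernel driver, if any.
if [[ -e $dev/driver ]]; then
    echo "$bdf" > "$dev/driver/unbind"
fi

# Ask the PCI core to bind this specific device to the target driver.
echo "$target" > "$dev/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo > "$dev/driver_override"   # clear the override afterwards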
00:21:50.220 ************************************ 00:21:50.220 END TEST env_dpdk_post_init 00:21:50.220 ************************************ 00:21:50.220 00:21:50.220 real 0m0.302s 00:21:50.220 user 0m0.096s 00:21:50.220 sys 0m0.109s 00:21:50.220 07:30:28 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.220 07:30:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.220 07:30:28 env -- common/autotest_common.sh@1142 -- # return 0 00:21:50.220 07:30:28 env -- env/env.sh@26 -- # uname 00:21:50.220 07:30:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:21:50.220 07:30:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:21:50.220 07:30:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:50.220 07:30:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.220 07:30:28 env -- common/autotest_common.sh@10 -- # set +x 00:21:50.220 ************************************ 00:21:50.220 START TEST env_mem_callbacks 00:21:50.220 ************************************ 00:21:50.220 07:30:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:21:50.220 EAL: Detected CPU lcores: 10 00:21:50.220 EAL: Detected NUMA nodes: 1 00:21:50.220 EAL: Detected shared linkage of DPDK 00:21:50.479 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:21:50.479 EAL: Selected IOVA mode 'PA' 00:21:50.479 00:21:50.479 00:21:50.479 CUnit - A unit testing framework for C - Version 2.1-3 00:21:50.479 http://cunit.sourceforge.net/ 00:21:50.479 00:21:50.479 00:21:50.479 Suite: memory 00:21:50.479 Test: test ... 00:21:50.479 TELEMETRY: No legacy callbacks, legacy socket not created 00:21:50.479 register 0x200000200000 2097152 00:21:50.479 malloc 3145728 00:21:50.479 register 0x200000400000 4194304 00:21:50.479 buf 0x2000004fffc0 len 3145728 PASSED 00:21:50.479 malloc 64 00:21:50.479 buf 0x2000004ffec0 len 64 PASSED 00:21:50.479 malloc 4194304 00:21:50.479 register 0x200000800000 6291456 00:21:50.479 buf 0x2000009fffc0 len 4194304 PASSED 00:21:50.479 free 0x2000004fffc0 3145728 00:21:50.479 free 0x2000004ffec0 64 00:21:50.479 unregister 0x200000400000 4194304 PASSED 00:21:50.479 free 0x2000009fffc0 4194304 00:21:50.479 unregister 0x200000800000 6291456 PASSED 00:21:50.479 malloc 8388608 00:21:50.479 register 0x200000400000 10485760 00:21:50.479 buf 0x2000005fffc0 len 8388608 PASSED 00:21:50.479 free 0x2000005fffc0 8388608 00:21:50.479 unregister 0x200000400000 10485760 PASSED 00:21:50.479 passed 00:21:50.479 00:21:50.479 Run Summary: Type Total Ran Passed Failed Inactive 00:21:50.479 suites 1 1 n/a 0 0 00:21:50.479 tests 1 1 1 0 0 00:21:50.479 asserts 15 15 15 0 n/a 00:21:50.479 00:21:50.479 Elapsed time = 0.059 seconds 00:21:50.479 00:21:50.479 real 0m0.271s 00:21:50.479 user 0m0.092s 00:21:50.479 sys 0m0.075s 00:21:50.479 07:30:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.479 07:30:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:21:50.479 ************************************ 00:21:50.479 END TEST env_mem_callbacks 00:21:50.479 ************************************ 00:21:50.479 07:30:29 env -- common/autotest_common.sh@1142 -- # return 0 00:21:50.479 ************************************ 00:21:50.479 END TEST env 00:21:50.479 ************************************ 00:21:50.479 00:21:50.479 real 0m10.191s 00:21:50.479 user 
0m7.944s 00:21:50.479 sys 0m1.829s 00:21:50.479 07:30:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.479 07:30:29 env -- common/autotest_common.sh@10 -- # set +x 00:21:50.737 07:30:29 -- common/autotest_common.sh@1142 -- # return 0 00:21:50.737 07:30:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:21:50.737 07:30:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:50.737 07:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.737 07:30:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.737 ************************************ 00:21:50.737 START TEST rpc 00:21:50.737 ************************************ 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:21:50.737 * Looking for test storage... 00:21:50.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:21:50.737 07:30:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62085 00:21:50.737 07:30:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:50.737 07:30:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:21:50.737 07:30:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62085 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@829 -- # '[' -z 62085 ']' 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.737 07:30:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:50.737 [2024-07-15 07:30:29.349748] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:21:50.737 [2024-07-15 07:30:29.350049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62085 ] 00:21:50.995 [2024-07-15 07:30:29.533157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.253 [2024-07-15 07:30:29.829795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:21:51.253 [2024-07-15 07:30:29.829895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62085' to capture a snapshot of events at runtime. 00:21:51.253 [2024-07-15 07:30:29.829934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.253 [2024-07-15 07:30:29.829965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.253 [2024-07-15 07:30:29.829981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62085 for offline analysis/debug. 
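The rpc.sh trace above starts spdk_tgt in the background, installs a trap so the target is killed on any exit, and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. The sketch below captures that launch/teardown pattern; waitforlisten and killprocess are simplified stand-ins for the autotest_common.sh helpers, and the rpc_get_methods polling command is an assumption about how readiness is detected, not the verbatim implementation:

#!/usr/bin/env bash
# Hedged sketch of the spdk_tgt launch/teardown pattern traced above.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
rpc_addr=/var/tmp/spdk.sock

killprocess() {
    kill "$1" 2>/dev/null || true
    wait "$1" 2>/dev/null || true
}

waitforlisten() {
    local pid=$1 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1
        # Poll until the target answers an RPC on its socket.
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_pid"

# ... run the rpc test cases against $rpc_addr here ...

trap - SIGINT SIGTERM EXIT
killprocess "$spdk_pid"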
00:21:51.253 [2024-07-15 07:30:29.830042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.186 07:30:30 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.186 07:30:30 rpc -- common/autotest_common.sh@862 -- # return 0 00:21:52.186 07:30:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:21:52.186 07:30:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:21:52.186 07:30:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:21:52.186 07:30:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:21:52.186 07:30:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:52.186 07:30:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.186 07:30:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.186 ************************************ 00:21:52.186 START TEST rpc_integrity 00:21:52.186 ************************************ 00:21:52.186 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:21:52.186 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:52.186 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.186 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.186 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.186 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:21:52.186 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:21:52.443 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:21:52.443 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.443 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:21:52.443 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.443 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.443 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:21:52.443 { 00:21:52.443 "name": "Malloc0", 00:21:52.443 "aliases": [ 00:21:52.443 "455dd277-194e-4731-924f-42360a30faa1" 00:21:52.443 ], 00:21:52.443 "product_name": "Malloc disk", 00:21:52.443 "block_size": 512, 00:21:52.443 "num_blocks": 16384, 00:21:52.443 "uuid": "455dd277-194e-4731-924f-42360a30faa1", 00:21:52.443 "assigned_rate_limits": { 00:21:52.443 "rw_ios_per_sec": 0, 00:21:52.443 "rw_mbytes_per_sec": 0, 00:21:52.443 "r_mbytes_per_sec": 0, 00:21:52.443 "w_mbytes_per_sec": 0 00:21:52.443 }, 00:21:52.443 "claimed": false, 00:21:52.443 "zoned": false, 00:21:52.443 "supported_io_types": { 00:21:52.443 "read": true, 00:21:52.443 "write": true, 00:21:52.443 "unmap": true, 00:21:52.444 "flush": true, 
00:21:52.444 "reset": true, 00:21:52.444 "nvme_admin": false, 00:21:52.444 "nvme_io": false, 00:21:52.444 "nvme_io_md": false, 00:21:52.444 "write_zeroes": true, 00:21:52.444 "zcopy": true, 00:21:52.444 "get_zone_info": false, 00:21:52.444 "zone_management": false, 00:21:52.444 "zone_append": false, 00:21:52.444 "compare": false, 00:21:52.444 "compare_and_write": false, 00:21:52.444 "abort": true, 00:21:52.444 "seek_hole": false, 00:21:52.444 "seek_data": false, 00:21:52.444 "copy": true, 00:21:52.444 "nvme_iov_md": false 00:21:52.444 }, 00:21:52.444 "memory_domains": [ 00:21:52.444 { 00:21:52.444 "dma_device_id": "system", 00:21:52.444 "dma_device_type": 1 00:21:52.444 }, 00:21:52.444 { 00:21:52.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.444 "dma_device_type": 2 00:21:52.444 } 00:21:52.444 ], 00:21:52.444 "driver_specific": {} 00:21:52.444 } 00:21:52.444 ]' 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.444 [2024-07-15 07:30:30.915941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:21:52.444 [2024-07-15 07:30:30.916057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.444 [2024-07-15 07:30:30.916107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:52.444 [2024-07-15 07:30:30.916126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.444 [2024-07-15 07:30:30.919239] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.444 [2024-07-15 07:30:30.919303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:21:52.444 Passthru0 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.444 07:30:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:21:52.444 { 00:21:52.444 "name": "Malloc0", 00:21:52.444 "aliases": [ 00:21:52.444 "455dd277-194e-4731-924f-42360a30faa1" 00:21:52.444 ], 00:21:52.444 "product_name": "Malloc disk", 00:21:52.444 "block_size": 512, 00:21:52.444 "num_blocks": 16384, 00:21:52.444 "uuid": "455dd277-194e-4731-924f-42360a30faa1", 00:21:52.444 "assigned_rate_limits": { 00:21:52.444 "rw_ios_per_sec": 0, 00:21:52.444 "rw_mbytes_per_sec": 0, 00:21:52.444 "r_mbytes_per_sec": 0, 00:21:52.444 "w_mbytes_per_sec": 0 00:21:52.444 }, 00:21:52.444 "claimed": true, 00:21:52.444 "claim_type": "exclusive_write", 00:21:52.444 "zoned": false, 00:21:52.444 "supported_io_types": { 00:21:52.444 "read": true, 00:21:52.444 "write": true, 00:21:52.444 "unmap": true, 00:21:52.444 "flush": true, 00:21:52.444 "reset": true, 00:21:52.444 "nvme_admin": false, 00:21:52.444 "nvme_io": false, 00:21:52.444 "nvme_io_md": false, 00:21:52.444 "write_zeroes": true, 00:21:52.444 "zcopy": true, 
00:21:52.444 "get_zone_info": false, 00:21:52.444 "zone_management": false, 00:21:52.444 "zone_append": false, 00:21:52.444 "compare": false, 00:21:52.444 "compare_and_write": false, 00:21:52.444 "abort": true, 00:21:52.444 "seek_hole": false, 00:21:52.444 "seek_data": false, 00:21:52.444 "copy": true, 00:21:52.444 "nvme_iov_md": false 00:21:52.444 }, 00:21:52.444 "memory_domains": [ 00:21:52.444 { 00:21:52.444 "dma_device_id": "system", 00:21:52.444 "dma_device_type": 1 00:21:52.444 }, 00:21:52.444 { 00:21:52.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.444 "dma_device_type": 2 00:21:52.444 } 00:21:52.444 ], 00:21:52.444 "driver_specific": {} 00:21:52.444 }, 00:21:52.444 { 00:21:52.444 "name": "Passthru0", 00:21:52.444 "aliases": [ 00:21:52.444 "fbe14a15-78e0-5bc1-b88b-f7a41d31af57" 00:21:52.444 ], 00:21:52.444 "product_name": "passthru", 00:21:52.444 "block_size": 512, 00:21:52.444 "num_blocks": 16384, 00:21:52.444 "uuid": "fbe14a15-78e0-5bc1-b88b-f7a41d31af57", 00:21:52.444 "assigned_rate_limits": { 00:21:52.444 "rw_ios_per_sec": 0, 00:21:52.444 "rw_mbytes_per_sec": 0, 00:21:52.444 "r_mbytes_per_sec": 0, 00:21:52.444 "w_mbytes_per_sec": 0 00:21:52.444 }, 00:21:52.444 "claimed": false, 00:21:52.444 "zoned": false, 00:21:52.444 "supported_io_types": { 00:21:52.444 "read": true, 00:21:52.444 "write": true, 00:21:52.444 "unmap": true, 00:21:52.444 "flush": true, 00:21:52.444 "reset": true, 00:21:52.444 "nvme_admin": false, 00:21:52.444 "nvme_io": false, 00:21:52.444 "nvme_io_md": false, 00:21:52.444 "write_zeroes": true, 00:21:52.444 "zcopy": true, 00:21:52.444 "get_zone_info": false, 00:21:52.444 "zone_management": false, 00:21:52.444 "zone_append": false, 00:21:52.444 "compare": false, 00:21:52.444 "compare_and_write": false, 00:21:52.444 "abort": true, 00:21:52.444 "seek_hole": false, 00:21:52.444 "seek_data": false, 00:21:52.444 "copy": true, 00:21:52.444 "nvme_iov_md": false 00:21:52.444 }, 00:21:52.444 "memory_domains": [ 00:21:52.444 { 00:21:52.444 "dma_device_id": "system", 00:21:52.444 "dma_device_type": 1 00:21:52.444 }, 00:21:52.444 { 00:21:52.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.444 "dma_device_type": 2 00:21:52.444 } 00:21:52.444 ], 00:21:52.444 "driver_specific": { 00:21:52.444 "passthru": { 00:21:52.444 "name": "Passthru0", 00:21:52.444 "base_bdev_name": "Malloc0" 00:21:52.444 } 00:21:52.444 } 00:21:52.444 } 00:21:52.444 ]' 00:21:52.444 07:30:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:21:52.444 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:21:52.444 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.444 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.444 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:21:52.444 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.703 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:21:52.703 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:21:52.703 ************************************ 00:21:52.703 END TEST rpc_integrity 00:21:52.703 ************************************ 00:21:52.703 07:30:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:21:52.703 00:21:52.703 real 0m0.349s 00:21:52.703 user 0m0.206s 00:21:52.703 sys 0m0.040s 00:21:52.703 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.703 07:30:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 07:30:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:21:52.703 07:30:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:21:52.703 07:30:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:52.703 07:30:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.703 07:30:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 ************************************ 00:21:52.703 START TEST rpc_plugins 00:21:52.703 ************************************ 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:21:52.703 { 00:21:52.703 "name": "Malloc1", 00:21:52.703 "aliases": [ 00:21:52.703 "40e9767c-c227-476f-ab88-f3f16e3cf6c6" 00:21:52.703 ], 00:21:52.703 "product_name": "Malloc disk", 00:21:52.703 "block_size": 4096, 00:21:52.703 "num_blocks": 256, 00:21:52.703 "uuid": "40e9767c-c227-476f-ab88-f3f16e3cf6c6", 00:21:52.703 "assigned_rate_limits": { 00:21:52.703 "rw_ios_per_sec": 0, 00:21:52.703 "rw_mbytes_per_sec": 0, 00:21:52.703 "r_mbytes_per_sec": 0, 00:21:52.703 "w_mbytes_per_sec": 0 00:21:52.703 }, 00:21:52.703 "claimed": false, 00:21:52.703 "zoned": false, 00:21:52.703 "supported_io_types": { 00:21:52.703 "read": true, 00:21:52.703 "write": true, 00:21:52.703 "unmap": true, 00:21:52.703 "flush": true, 00:21:52.703 "reset": true, 00:21:52.703 "nvme_admin": false, 00:21:52.703 "nvme_io": false, 00:21:52.703 "nvme_io_md": false, 00:21:52.703 "write_zeroes": true, 00:21:52.703 "zcopy": true, 00:21:52.703 "get_zone_info": false, 00:21:52.703 "zone_management": false, 00:21:52.703 "zone_append": false, 00:21:52.703 "compare": false, 00:21:52.703 "compare_and_write": false, 00:21:52.703 "abort": true, 00:21:52.703 "seek_hole": false, 00:21:52.703 "seek_data": false, 00:21:52.703 "copy": true, 00:21:52.703 "nvme_iov_md": false 00:21:52.703 }, 00:21:52.703 "memory_domains": [ 00:21:52.703 { 00:21:52.703 "dma_device_id": "system", 00:21:52.703 
"dma_device_type": 1 00:21:52.703 }, 00:21:52.703 { 00:21:52.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.703 "dma_device_type": 2 00:21:52.703 } 00:21:52.703 ], 00:21:52.703 "driver_specific": {} 00:21:52.703 } 00:21:52.703 ]' 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:52.703 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:21:52.703 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:21:52.962 ************************************ 00:21:52.962 END TEST rpc_plugins 00:21:52.962 ************************************ 00:21:52.962 07:30:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:21:52.962 00:21:52.962 real 0m0.179s 00:21:52.962 user 0m0.109s 00:21:52.962 sys 0m0.024s 00:21:52.962 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.962 07:30:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:21:52.962 07:30:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:21:52.962 07:30:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:21:52.962 07:30:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:52.962 07:30:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.962 07:30:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:52.962 ************************************ 00:21:52.962 START TEST rpc_trace_cmd_test 00:21:52.962 ************************************ 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:21:52.962 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62085", 00:21:52.962 "tpoint_group_mask": "0x8", 00:21:52.962 "iscsi_conn": { 00:21:52.962 "mask": "0x2", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "scsi": { 00:21:52.962 "mask": "0x4", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "bdev": { 00:21:52.962 "mask": "0x8", 00:21:52.962 "tpoint_mask": "0xffffffffffffffff" 00:21:52.962 }, 00:21:52.962 "nvmf_rdma": { 00:21:52.962 "mask": "0x10", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "nvmf_tcp": { 00:21:52.962 "mask": "0x20", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "ftl": 
{ 00:21:52.962 "mask": "0x40", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "blobfs": { 00:21:52.962 "mask": "0x80", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "dsa": { 00:21:52.962 "mask": "0x200", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "thread": { 00:21:52.962 "mask": "0x400", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "nvme_pcie": { 00:21:52.962 "mask": "0x800", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "iaa": { 00:21:52.962 "mask": "0x1000", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "nvme_tcp": { 00:21:52.962 "mask": "0x2000", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "bdev_nvme": { 00:21:52.962 "mask": "0x4000", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 }, 00:21:52.962 "sock": { 00:21:52.962 "mask": "0x8000", 00:21:52.962 "tpoint_mask": "0x0" 00:21:52.962 } 00:21:52.962 }' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:21:52.962 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:21:53.220 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:21:53.220 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:21:53.220 ************************************ 00:21:53.220 END TEST rpc_trace_cmd_test 00:21:53.220 ************************************ 00:21:53.220 07:30:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:21:53.220 00:21:53.220 real 0m0.265s 00:21:53.220 user 0m0.234s 00:21:53.220 sys 0m0.022s 00:21:53.220 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.220 07:30:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.220 07:30:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:21:53.220 07:30:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:21:53.220 07:30:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:21:53.220 07:30:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:21:53.220 07:30:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:53.220 07:30:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.220 07:30:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:53.220 ************************************ 00:21:53.220 START TEST rpc_daemon_integrity 00:21:53.220 ************************************ 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.220 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:21:53.220 { 00:21:53.220 "name": "Malloc2", 00:21:53.220 "aliases": [ 00:21:53.220 "4ba1df08-f2ef-48c7-ac92-4d25e6910645" 00:21:53.220 ], 00:21:53.220 "product_name": "Malloc disk", 00:21:53.220 "block_size": 512, 00:21:53.220 "num_blocks": 16384, 00:21:53.220 "uuid": "4ba1df08-f2ef-48c7-ac92-4d25e6910645", 00:21:53.220 "assigned_rate_limits": { 00:21:53.220 "rw_ios_per_sec": 0, 00:21:53.220 "rw_mbytes_per_sec": 0, 00:21:53.220 "r_mbytes_per_sec": 0, 00:21:53.220 "w_mbytes_per_sec": 0 00:21:53.220 }, 00:21:53.220 "claimed": false, 00:21:53.220 "zoned": false, 00:21:53.220 "supported_io_types": { 00:21:53.220 "read": true, 00:21:53.220 "write": true, 00:21:53.220 "unmap": true, 00:21:53.220 "flush": true, 00:21:53.220 "reset": true, 00:21:53.220 "nvme_admin": false, 00:21:53.220 "nvme_io": false, 00:21:53.220 "nvme_io_md": false, 00:21:53.220 "write_zeroes": true, 00:21:53.220 "zcopy": true, 00:21:53.220 "get_zone_info": false, 00:21:53.220 "zone_management": false, 00:21:53.220 "zone_append": false, 00:21:53.220 "compare": false, 00:21:53.220 "compare_and_write": false, 00:21:53.220 "abort": true, 00:21:53.220 "seek_hole": false, 00:21:53.220 "seek_data": false, 00:21:53.220 "copy": true, 00:21:53.220 "nvme_iov_md": false 00:21:53.220 }, 00:21:53.220 "memory_domains": [ 00:21:53.220 { 00:21:53.220 "dma_device_id": "system", 00:21:53.221 "dma_device_type": 1 00:21:53.221 }, 00:21:53.221 { 00:21:53.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.221 "dma_device_type": 2 00:21:53.221 } 00:21:53.221 ], 00:21:53.221 "driver_specific": {} 00:21:53.221 } 00:21:53.221 ]' 00:21:53.221 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.479 [2024-07-15 07:30:31.870597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:21:53.479 [2024-07-15 07:30:31.870830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.479 [2024-07-15 07:30:31.870887] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:53.479 [2024-07-15 07:30:31.870907] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.479 [2024-07-15 07:30:31.874230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.479 [2024-07-15 07:30:31.874276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:21:53.479 Passthru0 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:21:53.479 { 00:21:53.479 "name": "Malloc2", 00:21:53.479 "aliases": [ 00:21:53.479 "4ba1df08-f2ef-48c7-ac92-4d25e6910645" 00:21:53.479 ], 00:21:53.479 "product_name": "Malloc disk", 00:21:53.479 "block_size": 512, 00:21:53.479 "num_blocks": 16384, 00:21:53.479 "uuid": "4ba1df08-f2ef-48c7-ac92-4d25e6910645", 00:21:53.479 "assigned_rate_limits": { 00:21:53.479 "rw_ios_per_sec": 0, 00:21:53.479 "rw_mbytes_per_sec": 0, 00:21:53.479 "r_mbytes_per_sec": 0, 00:21:53.479 "w_mbytes_per_sec": 0 00:21:53.479 }, 00:21:53.479 "claimed": true, 00:21:53.479 "claim_type": "exclusive_write", 00:21:53.479 "zoned": false, 00:21:53.479 "supported_io_types": { 00:21:53.479 "read": true, 00:21:53.479 "write": true, 00:21:53.479 "unmap": true, 00:21:53.479 "flush": true, 00:21:53.479 "reset": true, 00:21:53.479 "nvme_admin": false, 00:21:53.479 "nvme_io": false, 00:21:53.479 "nvme_io_md": false, 00:21:53.479 "write_zeroes": true, 00:21:53.479 "zcopy": true, 00:21:53.479 "get_zone_info": false, 00:21:53.479 "zone_management": false, 00:21:53.479 "zone_append": false, 00:21:53.479 "compare": false, 00:21:53.479 "compare_and_write": false, 00:21:53.479 "abort": true, 00:21:53.479 "seek_hole": false, 00:21:53.479 "seek_data": false, 00:21:53.479 "copy": true, 00:21:53.479 "nvme_iov_md": false 00:21:53.479 }, 00:21:53.479 "memory_domains": [ 00:21:53.479 { 00:21:53.479 "dma_device_id": "system", 00:21:53.479 "dma_device_type": 1 00:21:53.479 }, 00:21:53.479 { 00:21:53.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.479 "dma_device_type": 2 00:21:53.479 } 00:21:53.479 ], 00:21:53.479 "driver_specific": {} 00:21:53.479 }, 00:21:53.479 { 00:21:53.479 "name": "Passthru0", 00:21:53.479 "aliases": [ 00:21:53.479 "eb648832-a28c-5829-b289-ed7c624788cd" 00:21:53.479 ], 00:21:53.479 "product_name": "passthru", 00:21:53.479 "block_size": 512, 00:21:53.479 "num_blocks": 16384, 00:21:53.479 "uuid": "eb648832-a28c-5829-b289-ed7c624788cd", 00:21:53.479 "assigned_rate_limits": { 00:21:53.479 "rw_ios_per_sec": 0, 00:21:53.479 "rw_mbytes_per_sec": 0, 00:21:53.479 "r_mbytes_per_sec": 0, 00:21:53.479 "w_mbytes_per_sec": 0 00:21:53.479 }, 00:21:53.479 "claimed": false, 00:21:53.479 "zoned": false, 00:21:53.479 "supported_io_types": { 00:21:53.479 "read": true, 00:21:53.479 "write": true, 00:21:53.479 "unmap": true, 00:21:53.479 "flush": true, 00:21:53.479 "reset": true, 00:21:53.479 "nvme_admin": false, 00:21:53.479 "nvme_io": false, 00:21:53.479 "nvme_io_md": false, 00:21:53.479 "write_zeroes": true, 00:21:53.479 "zcopy": true, 00:21:53.479 "get_zone_info": false, 00:21:53.479 "zone_management": false, 00:21:53.479 "zone_append": false, 00:21:53.479 "compare": 
false, 00:21:53.479 "compare_and_write": false, 00:21:53.479 "abort": true, 00:21:53.479 "seek_hole": false, 00:21:53.479 "seek_data": false, 00:21:53.479 "copy": true, 00:21:53.479 "nvme_iov_md": false 00:21:53.479 }, 00:21:53.479 "memory_domains": [ 00:21:53.479 { 00:21:53.479 "dma_device_id": "system", 00:21:53.479 "dma_device_type": 1 00:21:53.479 }, 00:21:53.479 { 00:21:53.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.479 "dma_device_type": 2 00:21:53.479 } 00:21:53.479 ], 00:21:53.479 "driver_specific": { 00:21:53.479 "passthru": { 00:21:53.479 "name": "Passthru0", 00:21:53.479 "base_bdev_name": "Malloc2" 00:21:53.479 } 00:21:53.479 } 00:21:53.479 } 00:21:53.479 ]' 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.479 07:30:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.479 07:30:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.479 07:30:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:21:53.480 07:30:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:21:53.480 ************************************ 00:21:53.480 END TEST rpc_daemon_integrity 00:21:53.480 ************************************ 00:21:53.480 07:30:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:21:53.480 00:21:53.480 real 0m0.342s 00:21:53.480 user 0m0.204s 00:21:53.480 sys 0m0.039s 00:21:53.480 07:30:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.480 07:30:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:21:53.737 07:30:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:53.737 07:30:32 rpc -- rpc/rpc.sh@84 -- # killprocess 62085 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 62085 ']' 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@952 -- # kill -0 62085 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@953 -- # uname 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62085 00:21:53.737 killing process with pid 62085 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:53.737 
07:30:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62085' 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@967 -- # kill 62085 00:21:53.737 07:30:32 rpc -- common/autotest_common.sh@972 -- # wait 62085 00:21:56.263 00:21:56.263 real 0m5.487s 00:21:56.263 user 0m6.011s 00:21:56.263 sys 0m0.974s 00:21:56.263 07:30:34 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.263 ************************************ 00:21:56.263 END TEST rpc 00:21:56.263 ************************************ 00:21:56.263 07:30:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:21:56.263 07:30:34 -- common/autotest_common.sh@1142 -- # return 0 00:21:56.263 07:30:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:21:56.263 07:30:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:56.263 07:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.263 07:30:34 -- common/autotest_common.sh@10 -- # set +x 00:21:56.263 ************************************ 00:21:56.263 START TEST skip_rpc 00:21:56.263 ************************************ 00:21:56.263 07:30:34 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:21:56.263 * Looking for test storage... 00:21:56.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:21:56.263 07:30:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:56.263 07:30:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:56.263 07:30:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:21:56.263 07:30:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:56.263 07:30:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.263 07:30:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:56.263 ************************************ 00:21:56.263 START TEST skip_rpc 00:21:56.263 ************************************ 00:21:56.263 07:30:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:21:56.263 07:30:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62311 00:21:56.263 07:30:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:21:56.263 07:30:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:56.263 07:30:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:21:56.521 [2024-07-15 07:30:34.891272] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
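The skip_rpc case that has just started flips the setup around: spdk_tgt is launched with --no-rpc-server, so the suite expects an RPC call (spdk_get_version) to fail rather than succeed, and treats that failure as the pass condition. Stripped down to a manual check with the same binary and flags (default socket path assumed):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                              # the test also just sleeps, since there is no RPC socket to wait on
  ./scripts/rpc.py spdk_get_version    # expected to fail: nothing is listening on /var/tmp/spdk.sock
  echo $?                              # a non-zero status here is what the NOT wrapper below is checking for
  kill %1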
00:21:56.521 [2024-07-15 07:30:34.891490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62311 ] 00:21:56.521 [2024-07-15 07:30:35.071595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.778 [2024-07-15 07:30:35.342972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62311 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62311 ']' 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62311 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62311 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:02.053 killing process with pid 62311 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62311' 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62311 00:22:02.053 07:30:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62311 00:22:03.954 00:22:03.954 real 0m7.535s 00:22:03.954 user 0m6.851s 00:22:03.954 sys 0m0.570s 00:22:03.954 07:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.954 07:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:03.954 ************************************ 00:22:03.954 END TEST skip_rpc 00:22:03.954 
************************************ 00:22:03.954 07:30:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:22:03.954 07:30:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:22:03.954 07:30:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:03.954 07:30:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.954 07:30:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:03.954 ************************************ 00:22:03.954 START TEST skip_rpc_with_json 00:22:03.954 ************************************ 00:22:03.954 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:22:03.954 07:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:22:03.954 07:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62421 00:22:03.954 07:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:03.954 07:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62421 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62421 ']' 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.955 07:30:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:03.955 [2024-07-15 07:30:42.456749] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
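skip_rpc_with_json, now starting, goes the other way: it builds target state over RPC (an nvmf TCP transport), dumps everything with save_config, restarts the target from that JSON file, and greps the new target's log to prove the transport was recreated purely from the saved config. The same round trip by hand, using the RPCs, flags, and paths visible in this trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # stop the first target, then replay the saved state without any RPC server at all
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt   # the test's pass condition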
00:22:03.955 [2024-07-15 07:30:42.456949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62421 ] 00:22:04.227 [2024-07-15 07:30:42.623974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.491 [2024-07-15 07:30:42.894765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:05.426 [2024-07-15 07:30:43.796233] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:22:05.426 request: 00:22:05.426 { 00:22:05.426 "trtype": "tcp", 00:22:05.426 "method": "nvmf_get_transports", 00:22:05.426 "req_id": 1 00:22:05.426 } 00:22:05.426 Got JSON-RPC error response 00:22:05.426 response: 00:22:05.426 { 00:22:05.426 "code": -19, 00:22:05.426 "message": "No such device" 00:22:05.426 } 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:05.426 [2024-07-15 07:30:43.808374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.426 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:05.426 { 00:22:05.426 "subsystems": [ 00:22:05.426 { 00:22:05.426 "subsystem": "keyring", 00:22:05.426 "config": [] 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "subsystem": "iobuf", 00:22:05.426 "config": [ 00:22:05.426 { 00:22:05.426 "method": "iobuf_set_options", 00:22:05.426 "params": { 00:22:05.426 "small_pool_count": 8192, 00:22:05.426 "large_pool_count": 1024, 00:22:05.426 "small_bufsize": 8192, 00:22:05.426 "large_bufsize": 135168 00:22:05.426 } 00:22:05.426 } 00:22:05.426 ] 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "subsystem": "sock", 00:22:05.426 "config": [ 00:22:05.426 { 00:22:05.426 "method": "sock_set_default_impl", 00:22:05.426 "params": { 00:22:05.426 "impl_name": "posix" 00:22:05.426 } 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "method": "sock_impl_set_options", 00:22:05.426 "params": { 00:22:05.426 "impl_name": "ssl", 00:22:05.426 "recv_buf_size": 4096, 00:22:05.426 "send_buf_size": 4096, 
00:22:05.426 "enable_recv_pipe": true, 00:22:05.426 "enable_quickack": false, 00:22:05.426 "enable_placement_id": 0, 00:22:05.426 "enable_zerocopy_send_server": true, 00:22:05.426 "enable_zerocopy_send_client": false, 00:22:05.426 "zerocopy_threshold": 0, 00:22:05.426 "tls_version": 0, 00:22:05.426 "enable_ktls": false 00:22:05.426 } 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "method": "sock_impl_set_options", 00:22:05.426 "params": { 00:22:05.426 "impl_name": "posix", 00:22:05.426 "recv_buf_size": 2097152, 00:22:05.426 "send_buf_size": 2097152, 00:22:05.426 "enable_recv_pipe": true, 00:22:05.426 "enable_quickack": false, 00:22:05.426 "enable_placement_id": 0, 00:22:05.426 "enable_zerocopy_send_server": true, 00:22:05.426 "enable_zerocopy_send_client": false, 00:22:05.426 "zerocopy_threshold": 0, 00:22:05.426 "tls_version": 0, 00:22:05.426 "enable_ktls": false 00:22:05.426 } 00:22:05.426 } 00:22:05.426 ] 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "subsystem": "vmd", 00:22:05.426 "config": [] 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "subsystem": "accel", 00:22:05.426 "config": [ 00:22:05.426 { 00:22:05.426 "method": "accel_set_options", 00:22:05.426 "params": { 00:22:05.426 "small_cache_size": 128, 00:22:05.426 "large_cache_size": 16, 00:22:05.426 "task_count": 2048, 00:22:05.426 "sequence_count": 2048, 00:22:05.426 "buf_count": 2048 00:22:05.426 } 00:22:05.426 } 00:22:05.426 ] 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "subsystem": "bdev", 00:22:05.426 "config": [ 00:22:05.426 { 00:22:05.426 "method": "bdev_set_options", 00:22:05.426 "params": { 00:22:05.426 "bdev_io_pool_size": 65535, 00:22:05.426 "bdev_io_cache_size": 256, 00:22:05.426 "bdev_auto_examine": true, 00:22:05.426 "iobuf_small_cache_size": 128, 00:22:05.426 "iobuf_large_cache_size": 16 00:22:05.426 } 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "method": "bdev_raid_set_options", 00:22:05.426 "params": { 00:22:05.426 "process_window_size_kb": 1024 00:22:05.426 } 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "method": "bdev_iscsi_set_options", 00:22:05.426 "params": { 00:22:05.426 "timeout_sec": 30 00:22:05.426 } 00:22:05.426 }, 00:22:05.426 { 00:22:05.426 "method": "bdev_nvme_set_options", 00:22:05.426 "params": { 00:22:05.426 "action_on_timeout": "none", 00:22:05.426 "timeout_us": 0, 00:22:05.426 "timeout_admin_us": 0, 00:22:05.426 "keep_alive_timeout_ms": 10000, 00:22:05.426 "arbitration_burst": 0, 00:22:05.427 "low_priority_weight": 0, 00:22:05.427 "medium_priority_weight": 0, 00:22:05.427 "high_priority_weight": 0, 00:22:05.427 "nvme_adminq_poll_period_us": 10000, 00:22:05.427 "nvme_ioq_poll_period_us": 0, 00:22:05.427 "io_queue_requests": 0, 00:22:05.427 "delay_cmd_submit": true, 00:22:05.427 "transport_retry_count": 4, 00:22:05.427 "bdev_retry_count": 3, 00:22:05.427 "transport_ack_timeout": 0, 00:22:05.427 "ctrlr_loss_timeout_sec": 0, 00:22:05.427 "reconnect_delay_sec": 0, 00:22:05.427 "fast_io_fail_timeout_sec": 0, 00:22:05.427 "disable_auto_failback": false, 00:22:05.427 "generate_uuids": false, 00:22:05.427 "transport_tos": 0, 00:22:05.427 "nvme_error_stat": false, 00:22:05.427 "rdma_srq_size": 0, 00:22:05.427 "io_path_stat": false, 00:22:05.427 "allow_accel_sequence": false, 00:22:05.427 "rdma_max_cq_size": 0, 00:22:05.427 "rdma_cm_event_timeout_ms": 0, 00:22:05.427 "dhchap_digests": [ 00:22:05.427 "sha256", 00:22:05.427 "sha384", 00:22:05.427 "sha512" 00:22:05.427 ], 00:22:05.427 "dhchap_dhgroups": [ 00:22:05.427 "null", 00:22:05.427 "ffdhe2048", 00:22:05.427 "ffdhe3072", 00:22:05.427 "ffdhe4096", 00:22:05.427 
"ffdhe6144", 00:22:05.427 "ffdhe8192" 00:22:05.427 ] 00:22:05.427 } 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "method": "bdev_nvme_set_hotplug", 00:22:05.427 "params": { 00:22:05.427 "period_us": 100000, 00:22:05.427 "enable": false 00:22:05.427 } 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "method": "bdev_wait_for_examine" 00:22:05.427 } 00:22:05.427 ] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "scsi", 00:22:05.427 "config": null 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "scheduler", 00:22:05.427 "config": [ 00:22:05.427 { 00:22:05.427 "method": "framework_set_scheduler", 00:22:05.427 "params": { 00:22:05.427 "name": "static" 00:22:05.427 } 00:22:05.427 } 00:22:05.427 ] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "vhost_scsi", 00:22:05.427 "config": [] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "vhost_blk", 00:22:05.427 "config": [] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "ublk", 00:22:05.427 "config": [] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "nbd", 00:22:05.427 "config": [] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "nvmf", 00:22:05.427 "config": [ 00:22:05.427 { 00:22:05.427 "method": "nvmf_set_config", 00:22:05.427 "params": { 00:22:05.427 "discovery_filter": "match_any", 00:22:05.427 "admin_cmd_passthru": { 00:22:05.427 "identify_ctrlr": false 00:22:05.427 } 00:22:05.427 } 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "method": "nvmf_set_max_subsystems", 00:22:05.427 "params": { 00:22:05.427 "max_subsystems": 1024 00:22:05.427 } 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "method": "nvmf_set_crdt", 00:22:05.427 "params": { 00:22:05.427 "crdt1": 0, 00:22:05.427 "crdt2": 0, 00:22:05.427 "crdt3": 0 00:22:05.427 } 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "method": "nvmf_create_transport", 00:22:05.427 "params": { 00:22:05.427 "trtype": "TCP", 00:22:05.427 "max_queue_depth": 128, 00:22:05.427 "max_io_qpairs_per_ctrlr": 127, 00:22:05.427 "in_capsule_data_size": 4096, 00:22:05.427 "max_io_size": 131072, 00:22:05.427 "io_unit_size": 131072, 00:22:05.427 "max_aq_depth": 128, 00:22:05.427 "num_shared_buffers": 511, 00:22:05.427 "buf_cache_size": 4294967295, 00:22:05.427 "dif_insert_or_strip": false, 00:22:05.427 "zcopy": false, 00:22:05.427 "c2h_success": true, 00:22:05.427 "sock_priority": 0, 00:22:05.427 "abort_timeout_sec": 1, 00:22:05.427 "ack_timeout": 0, 00:22:05.427 "data_wr_pool_size": 0 00:22:05.427 } 00:22:05.427 } 00:22:05.427 ] 00:22:05.427 }, 00:22:05.427 { 00:22:05.427 "subsystem": "iscsi", 00:22:05.427 "config": [ 00:22:05.427 { 00:22:05.427 "method": "iscsi_set_options", 00:22:05.427 "params": { 00:22:05.427 "node_base": "iqn.2016-06.io.spdk", 00:22:05.427 "max_sessions": 128, 00:22:05.427 "max_connections_per_session": 2, 00:22:05.427 "max_queue_depth": 64, 00:22:05.427 "default_time2wait": 2, 00:22:05.427 "default_time2retain": 20, 00:22:05.427 "first_burst_length": 8192, 00:22:05.427 "immediate_data": true, 00:22:05.427 "allow_duplicated_isid": false, 00:22:05.427 "error_recovery_level": 0, 00:22:05.427 "nop_timeout": 60, 00:22:05.427 "nop_in_interval": 30, 00:22:05.427 "disable_chap": false, 00:22:05.427 "require_chap": false, 00:22:05.427 "mutual_chap": false, 00:22:05.427 "chap_group": 0, 00:22:05.427 "max_large_datain_per_connection": 64, 00:22:05.427 "max_r2t_per_connection": 4, 00:22:05.427 "pdu_pool_size": 36864, 00:22:05.427 "immediate_data_pool_size": 16384, 00:22:05.427 "data_out_pool_size": 2048 00:22:05.427 } 00:22:05.427 } 00:22:05.427 ] 00:22:05.427 } 
00:22:05.427 ] 00:22:05.427 } 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62421 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62421 ']' 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62421 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.427 07:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62421 00:22:05.427 07:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:05.427 07:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:05.427 killing process with pid 62421 00:22:05.427 07:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62421' 00:22:05.427 07:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62421 00:22:05.427 07:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62421 00:22:08.024 07:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62477 00:22:08.024 07:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:08.024 07:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62477 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62477 ']' 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62477 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62477 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:13.281 killing process with pid 62477 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62477' 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62477 00:22:13.281 07:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62477 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:15.814 00:22:15.814 real 0m11.697s 00:22:15.814 user 0m10.865s 00:22:15.814 sys 0m1.207s 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.814 ************************************ 00:22:15.814 END TEST skip_rpc_with_json 00:22:15.814 
************************************ 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:22:15.814 07:30:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.814 ************************************ 00:22:15.814 START TEST skip_rpc_with_delay 00:22:15.814 ************************************ 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:15.814 [2024-07-15 07:30:54.205700] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:22:15.814 [2024-07-15 07:30:54.205932] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.814 00:22:15.814 real 0m0.181s 00:22:15.814 user 0m0.096s 00:22:15.814 sys 0m0.083s 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.814 ************************************ 00:22:15.814 END TEST skip_rpc_with_delay 00:22:15.814 ************************************ 00:22:15.814 07:30:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:22:15.814 07:30:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:22:15.814 07:30:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:22:15.814 07:30:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.814 07:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.814 ************************************ 00:22:15.814 START TEST exit_on_failed_rpc_init 00:22:15.814 ************************************ 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62605 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62605 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62605 ']' 00:22:15.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.814 07:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.073 [2024-07-15 07:30:54.443344] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:22:16.073 [2024-07-15 07:30:54.443541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62605 ] 00:22:16.073 [2024-07-15 07:30:54.615705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.640 [2024-07-15 07:30:54.946718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:17.574 07:30:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:17.574 [2024-07-15 07:30:55.982693] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:22:17.574 [2024-07-15 07:30:55.982906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62634 ] 00:22:17.574 [2024-07-15 07:30:56.163607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.831 [2024-07-15 07:30:56.430507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.831 [2024-07-15 07:30:56.430671] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
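exit_on_failed_rpc_init provokes exactly this failure on purpose: a second spdk_tgt (-m 0x2) is launched while the first instance still owns /var/tmp/spdk.sock, so the new RPC listener cannot bind and the rpc.c errors here (and the non-zero exit that follows) are the expected result. Outside the test, two targets can coexist by giving each its own RPC socket; a sketch, assuming the usual -r/--rpc-socket option on spdk_tgt and the -s switch on rpc.py:

  ./build/bin/spdk_tgt -m 0x1 &                          # first instance keeps the default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # second instance binds its own socket instead of colliding
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version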
00:22:17.831 [2024-07-15 07:30:56.430697] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:22:17.831 [2024-07-15 07:30:56.430717] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62605 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62605 ']' 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62605 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62605 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.397 killing process with pid 62605 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62605' 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62605 00:22:18.397 07:30:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62605 00:22:20.929 00:22:20.929 real 0m5.103s 00:22:20.929 user 0m5.725s 00:22:20.929 sys 0m0.819s 00:22:20.929 ************************************ 00:22:20.929 END TEST exit_on_failed_rpc_init 00:22:20.929 ************************************ 00:22:20.929 07:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.929 07:30:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.929 07:30:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:22:20.929 07:30:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:20.929 00:22:20.929 real 0m24.818s 00:22:20.929 user 0m23.637s 00:22:20.929 sys 0m2.862s 00:22:20.929 07:30:59 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.929 07:30:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:20.929 ************************************ 00:22:20.929 END TEST skip_rpc 00:22:20.929 ************************************ 00:22:20.929 07:30:59 -- common/autotest_common.sh@1142 -- # return 0 00:22:20.929 07:30:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:20.929 07:30:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:20.929 
07:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.929 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:20.929 ************************************ 00:22:20.929 START TEST rpc_client 00:22:20.929 ************************************ 00:22:20.929 07:30:59 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:21.188 * Looking for test storage... 00:22:21.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:22:21.188 07:30:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:22:21.188 OK 00:22:21.188 07:30:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:22:21.188 00:22:21.188 real 0m0.148s 00:22:21.188 user 0m0.061s 00:22:21.188 sys 0m0.093s 00:22:21.188 07:30:59 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.188 ************************************ 00:22:21.188 END TEST rpc_client 00:22:21.188 ************************************ 00:22:21.188 07:30:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:22:21.188 07:30:59 -- common/autotest_common.sh@1142 -- # return 0 00:22:21.188 07:30:59 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:21.188 07:30:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:21.188 07:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.188 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:21.188 ************************************ 00:22:21.188 START TEST json_config 00:22:21.188 ************************************ 00:22:21.188 07:30:59 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:21.188 07:30:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f109166-b1ec-48bc-8a74-71b6d6599bfb 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2f109166-b1ec-48bc-8a74-71b6d6599bfb 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.188 07:30:59 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.188 07:30:59 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.188 07:30:59 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.188 07:30:59 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.188 07:30:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.188 07:30:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.188 07:30:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.188 07:30:59 json_config -- paths/export.sh@5 -- # export PATH 00:22:21.188 07:30:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@47 -- # : 0 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.188 07:30:59 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.188 07:30:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:22:21.447 WARNING: No tests are enabled so not running JSON configuration tests 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@26 -- # 
(( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:22:21.447 07:30:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:22:21.447 00:22:21.447 real 0m0.075s 00:22:21.447 user 0m0.033s 00:22:21.447 sys 0m0.041s 00:22:21.447 07:30:59 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:21.447 07:30:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:22:21.447 ************************************ 00:22:21.447 END TEST json_config 00:22:21.447 ************************************ 00:22:21.447 07:30:59 -- common/autotest_common.sh@1142 -- # return 0 00:22:21.447 07:30:59 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:21.447 07:30:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:21.447 07:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.447 07:30:59 -- common/autotest_common.sh@10 -- # set +x 00:22:21.447 ************************************ 00:22:21.447 START TEST json_config_extra_key 00:22:21.447 ************************************ 00:22:21.447 07:30:59 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:21.447 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f109166-b1ec-48bc-8a74-71b6d6599bfb 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2f109166-b1ec-48bc-8a74-71b6d6599bfb 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.447 07:30:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.448 07:30:59 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:21.448 07:30:59 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.448 07:30:59 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.448 07:30:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.448 07:30:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.448 07:30:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.448 07:30:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:22:21.448 07:30:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:21.448 07:30:59 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
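The trace above shows json_config/common.sh keeping per-application bookkeeping in bash associative arrays keyed by app name ('target' here): its pid, RPC socket, spdk_tgt parameters, and JSON config path. A minimal sketch of that pattern, using the values visible in the trace and an illustrative config path, assuming bash 4+ for declare -A:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock     # RPC socket the app will listen on
    app_params[target]='-m 0x1 -s 1024'           # core mask and memory size for spdk_tgt
    configs_path[target]=./extra_key.json         # illustrative path to the preloaded JSON config
    app_pid[target]=''                            # filled in once the app is launched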
00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:22:21.448 INFO: launching applications... 00:22:21.448 Waiting for target to run... 00:22:21.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:22:21.448 07:30:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62820 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62820 /var/tmp/spdk_tgt.sock 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62820 ']' 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.448 07:30:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:21.448 07:30:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:21.707 [2024-07-15 07:31:00.061179] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
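json_config_test_start_app above launches spdk_tgt with -r pointing at the app's UNIX socket and --json naming the preloaded config, then waitforlisten blocks until the RPC socket answers. A rough standalone sketch of that launch-and-wait step (binary and config paths are illustrative, and the polling loop is a simplification of the real waitforlisten helper):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./extra_key.json &
    pid=$!
    # wait until the RPC socket accepts requests, bailing out if the target dies first
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited during startup' >&2; exit 1; }
        sleep 0.5
    done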
00:22:21.707 [2024-07-15 07:31:00.061381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62820 ] 00:22:22.274 [2024-07-15 07:31:00.647798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.533 [2024-07-15 07:31:00.908166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.467 07:31:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.467 00:22:23.467 07:31:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:22:23.467 INFO: shutting down applications... 00:22:23.467 07:31:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:22:23.467 07:31:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62820 ]] 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62820 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:23.467 07:31:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:23.726 07:31:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:23.726 07:31:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:23.726 07:31:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:23.726 07:31:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:24.291 07:31:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:24.291 07:31:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:24.291 07:31:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:24.291 07:31:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:24.855 07:31:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:24.855 07:31:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:24.855 07:31:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:24.856 07:31:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:25.421 07:31:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:25.421 07:31:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:25.421 07:31:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:25.421 07:31:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:25.679 07:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:25.679 07:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:25.679 07:31:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 
00:22:25.679 07:31:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:26.246 07:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:26.246 07:31:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:26.246 07:31:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62820 00:22:26.246 07:31:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:22:26.246 07:31:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:22:26.246 SPDK target shutdown done 00:22:26.247 07:31:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:22:26.247 07:31:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:22:26.247 Success 00:22:26.247 07:31:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:22:26.247 00:22:26.247 real 0m4.907s 00:22:26.247 user 0m4.625s 00:22:26.247 sys 0m0.804s 00:22:26.247 07:31:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.247 07:31:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:26.247 ************************************ 00:22:26.247 END TEST json_config_extra_key 00:22:26.247 ************************************ 00:22:26.247 07:31:04 -- common/autotest_common.sh@1142 -- # return 0 00:22:26.247 07:31:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:26.247 07:31:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:26.247 07:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.247 07:31:04 -- common/autotest_common.sh@10 -- # set +x 00:22:26.247 ************************************ 00:22:26.247 START TEST alias_rpc 00:22:26.247 ************************************ 00:22:26.247 07:31:04 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:26.513 * Looking for test storage... 00:22:26.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:22:26.513 07:31:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:22:26.513 07:31:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62931 00:22:26.513 07:31:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62931 00:22:26.513 07:31:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62931 ']' 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.513 07:31:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:26.513 [2024-07-15 07:31:05.023899] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
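The json_config_extra_key shutdown earlier in this trace sends SIGINT to the target and then polls it with kill -0, sleeping 0.5 s between checks for at most 30 iterations before reporting 'SPDK target shutdown done'. A condensed sketch of that polling pattern (the interval and iteration limit are taken from the trace; failure handling is simplified):

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done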
00:22:26.513 [2024-07-15 07:31:05.024953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:22:26.772 [2024-07-15 07:31:05.226171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.030 [2024-07-15 07:31:05.516994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.964 07:31:06 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.964 07:31:06 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:22:27.964 07:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:22:28.222 07:31:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62931 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62931 ']' 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62931 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62931 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.223 killing process with pid 62931 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62931' 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@967 -- # kill 62931 00:22:28.223 07:31:06 alias_rpc -- common/autotest_common.sh@972 -- # wait 62931 00:22:30.795 00:22:30.795 real 0m4.406s 00:22:30.795 user 0m4.325s 00:22:30.795 sys 0m0.715s 00:22:30.795 ************************************ 00:22:30.795 END TEST alias_rpc 00:22:30.795 ************************************ 00:22:30.795 07:31:09 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.795 07:31:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:30.795 07:31:09 -- common/autotest_common.sh@1142 -- # return 0 00:22:30.795 07:31:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:22:30.795 07:31:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:30.796 07:31:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:30.796 07:31:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.796 07:31:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.796 ************************************ 00:22:30.796 START TEST spdkcli_tcp 00:22:30.796 ************************************ 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:30.796 * Looking for test storage... 
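killprocess above guards the final kill of pid 62931: it checks that the pid is non-empty and still alive, resolves the process name with ps --no-headers -o comm= (refusing to signal a sudo wrapper), then kills and waits for it. A simplified sketch of that guard, assuming Linux and that the target process is a child of the calling shell so wait can reap it:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1    # never signal a sudo wrapper process
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }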
00:22:30.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63030 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:30.796 07:31:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63030 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63030 ']' 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.796 07:31:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.054 [2024-07-15 07:31:09.471649] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
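The spdkcli_tcp test above fixes IP_ADDRESS=127.0.0.1 and PORT=9998 and starts spdk_tgt on two cores (-m 0x3 -p 0); as the following trace shows, it then exposes the target's UNIX RPC socket on that TCP port with socat and drives rpc.py against 127.0.0.1:9998, where rpc_get_methods returns the long method list below. A condensed sketch of that bridge (socket path, port, and the retry/timeout options are copied from the trace; cleanup is simplified):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # issue JSON-RPC over TCP instead of the UNIX socket
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"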
00:22:31.054 [2024-07-15 07:31:09.471834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:22:31.054 [2024-07-15 07:31:09.639842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:31.313 [2024-07-15 07:31:09.918621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.313 [2024-07-15 07:31:09.918633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.258 07:31:10 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.258 07:31:10 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:22:32.258 07:31:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63047 00:22:32.258 07:31:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:22:32.258 07:31:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:22:32.516 [ 00:22:32.516 "bdev_malloc_delete", 00:22:32.516 "bdev_malloc_create", 00:22:32.516 "bdev_null_resize", 00:22:32.516 "bdev_null_delete", 00:22:32.516 "bdev_null_create", 00:22:32.516 "bdev_nvme_cuse_unregister", 00:22:32.516 "bdev_nvme_cuse_register", 00:22:32.516 "bdev_opal_new_user", 00:22:32.516 "bdev_opal_set_lock_state", 00:22:32.516 "bdev_opal_delete", 00:22:32.516 "bdev_opal_get_info", 00:22:32.516 "bdev_opal_create", 00:22:32.516 "bdev_nvme_opal_revert", 00:22:32.516 "bdev_nvme_opal_init", 00:22:32.516 "bdev_nvme_send_cmd", 00:22:32.516 "bdev_nvme_get_path_iostat", 00:22:32.516 "bdev_nvme_get_mdns_discovery_info", 00:22:32.516 "bdev_nvme_stop_mdns_discovery", 00:22:32.516 "bdev_nvme_start_mdns_discovery", 00:22:32.516 "bdev_nvme_set_multipath_policy", 00:22:32.516 "bdev_nvme_set_preferred_path", 00:22:32.516 "bdev_nvme_get_io_paths", 00:22:32.516 "bdev_nvme_remove_error_injection", 00:22:32.516 "bdev_nvme_add_error_injection", 00:22:32.516 "bdev_nvme_get_discovery_info", 00:22:32.516 "bdev_nvme_stop_discovery", 00:22:32.516 "bdev_nvme_start_discovery", 00:22:32.516 "bdev_nvme_get_controller_health_info", 00:22:32.516 "bdev_nvme_disable_controller", 00:22:32.516 "bdev_nvme_enable_controller", 00:22:32.516 "bdev_nvme_reset_controller", 00:22:32.516 "bdev_nvme_get_transport_statistics", 00:22:32.516 "bdev_nvme_apply_firmware", 00:22:32.516 "bdev_nvme_detach_controller", 00:22:32.516 "bdev_nvme_get_controllers", 00:22:32.516 "bdev_nvme_attach_controller", 00:22:32.516 "bdev_nvme_set_hotplug", 00:22:32.516 "bdev_nvme_set_options", 00:22:32.516 "bdev_passthru_delete", 00:22:32.516 "bdev_passthru_create", 00:22:32.516 "bdev_lvol_set_parent_bdev", 00:22:32.516 "bdev_lvol_set_parent", 00:22:32.516 "bdev_lvol_check_shallow_copy", 00:22:32.516 "bdev_lvol_start_shallow_copy", 00:22:32.516 "bdev_lvol_grow_lvstore", 00:22:32.516 "bdev_lvol_get_lvols", 00:22:32.516 "bdev_lvol_get_lvstores", 00:22:32.516 "bdev_lvol_delete", 00:22:32.516 "bdev_lvol_set_read_only", 00:22:32.516 "bdev_lvol_resize", 00:22:32.516 "bdev_lvol_decouple_parent", 00:22:32.516 "bdev_lvol_inflate", 00:22:32.516 "bdev_lvol_rename", 00:22:32.516 "bdev_lvol_clone_bdev", 00:22:32.516 "bdev_lvol_clone", 00:22:32.516 "bdev_lvol_snapshot", 00:22:32.516 "bdev_lvol_create", 00:22:32.516 "bdev_lvol_delete_lvstore", 00:22:32.516 "bdev_lvol_rename_lvstore", 00:22:32.516 "bdev_lvol_create_lvstore", 
00:22:32.516 "bdev_raid_set_options", 00:22:32.516 "bdev_raid_remove_base_bdev", 00:22:32.516 "bdev_raid_add_base_bdev", 00:22:32.516 "bdev_raid_delete", 00:22:32.516 "bdev_raid_create", 00:22:32.516 "bdev_raid_get_bdevs", 00:22:32.516 "bdev_error_inject_error", 00:22:32.516 "bdev_error_delete", 00:22:32.516 "bdev_error_create", 00:22:32.516 "bdev_split_delete", 00:22:32.516 "bdev_split_create", 00:22:32.516 "bdev_delay_delete", 00:22:32.516 "bdev_delay_create", 00:22:32.516 "bdev_delay_update_latency", 00:22:32.516 "bdev_zone_block_delete", 00:22:32.516 "bdev_zone_block_create", 00:22:32.516 "blobfs_create", 00:22:32.516 "blobfs_detect", 00:22:32.516 "blobfs_set_cache_size", 00:22:32.516 "bdev_xnvme_delete", 00:22:32.516 "bdev_xnvme_create", 00:22:32.516 "bdev_aio_delete", 00:22:32.516 "bdev_aio_rescan", 00:22:32.516 "bdev_aio_create", 00:22:32.516 "bdev_ftl_set_property", 00:22:32.516 "bdev_ftl_get_properties", 00:22:32.516 "bdev_ftl_get_stats", 00:22:32.516 "bdev_ftl_unmap", 00:22:32.516 "bdev_ftl_unload", 00:22:32.516 "bdev_ftl_delete", 00:22:32.516 "bdev_ftl_load", 00:22:32.516 "bdev_ftl_create", 00:22:32.516 "bdev_virtio_attach_controller", 00:22:32.516 "bdev_virtio_scsi_get_devices", 00:22:32.516 "bdev_virtio_detach_controller", 00:22:32.516 "bdev_virtio_blk_set_hotplug", 00:22:32.516 "bdev_iscsi_delete", 00:22:32.516 "bdev_iscsi_create", 00:22:32.516 "bdev_iscsi_set_options", 00:22:32.516 "accel_error_inject_error", 00:22:32.516 "ioat_scan_accel_module", 00:22:32.516 "dsa_scan_accel_module", 00:22:32.516 "iaa_scan_accel_module", 00:22:32.516 "keyring_file_remove_key", 00:22:32.516 "keyring_file_add_key", 00:22:32.516 "keyring_linux_set_options", 00:22:32.516 "iscsi_get_histogram", 00:22:32.516 "iscsi_enable_histogram", 00:22:32.516 "iscsi_set_options", 00:22:32.516 "iscsi_get_auth_groups", 00:22:32.516 "iscsi_auth_group_remove_secret", 00:22:32.516 "iscsi_auth_group_add_secret", 00:22:32.516 "iscsi_delete_auth_group", 00:22:32.516 "iscsi_create_auth_group", 00:22:32.516 "iscsi_set_discovery_auth", 00:22:32.516 "iscsi_get_options", 00:22:32.516 "iscsi_target_node_request_logout", 00:22:32.516 "iscsi_target_node_set_redirect", 00:22:32.516 "iscsi_target_node_set_auth", 00:22:32.516 "iscsi_target_node_add_lun", 00:22:32.516 "iscsi_get_stats", 00:22:32.516 "iscsi_get_connections", 00:22:32.516 "iscsi_portal_group_set_auth", 00:22:32.516 "iscsi_start_portal_group", 00:22:32.516 "iscsi_delete_portal_group", 00:22:32.516 "iscsi_create_portal_group", 00:22:32.516 "iscsi_get_portal_groups", 00:22:32.516 "iscsi_delete_target_node", 00:22:32.516 "iscsi_target_node_remove_pg_ig_maps", 00:22:32.516 "iscsi_target_node_add_pg_ig_maps", 00:22:32.517 "iscsi_create_target_node", 00:22:32.517 "iscsi_get_target_nodes", 00:22:32.517 "iscsi_delete_initiator_group", 00:22:32.517 "iscsi_initiator_group_remove_initiators", 00:22:32.517 "iscsi_initiator_group_add_initiators", 00:22:32.517 "iscsi_create_initiator_group", 00:22:32.517 "iscsi_get_initiator_groups", 00:22:32.517 "nvmf_set_crdt", 00:22:32.517 "nvmf_set_config", 00:22:32.517 "nvmf_set_max_subsystems", 00:22:32.517 "nvmf_stop_mdns_prr", 00:22:32.517 "nvmf_publish_mdns_prr", 00:22:32.517 "nvmf_subsystem_get_listeners", 00:22:32.517 "nvmf_subsystem_get_qpairs", 00:22:32.517 "nvmf_subsystem_get_controllers", 00:22:32.517 "nvmf_get_stats", 00:22:32.517 "nvmf_get_transports", 00:22:32.517 "nvmf_create_transport", 00:22:32.517 "nvmf_get_targets", 00:22:32.517 "nvmf_delete_target", 00:22:32.517 "nvmf_create_target", 00:22:32.517 
"nvmf_subsystem_allow_any_host", 00:22:32.517 "nvmf_subsystem_remove_host", 00:22:32.517 "nvmf_subsystem_add_host", 00:22:32.517 "nvmf_ns_remove_host", 00:22:32.517 "nvmf_ns_add_host", 00:22:32.517 "nvmf_subsystem_remove_ns", 00:22:32.517 "nvmf_subsystem_add_ns", 00:22:32.517 "nvmf_subsystem_listener_set_ana_state", 00:22:32.517 "nvmf_discovery_get_referrals", 00:22:32.517 "nvmf_discovery_remove_referral", 00:22:32.517 "nvmf_discovery_add_referral", 00:22:32.517 "nvmf_subsystem_remove_listener", 00:22:32.517 "nvmf_subsystem_add_listener", 00:22:32.517 "nvmf_delete_subsystem", 00:22:32.517 "nvmf_create_subsystem", 00:22:32.517 "nvmf_get_subsystems", 00:22:32.517 "env_dpdk_get_mem_stats", 00:22:32.517 "nbd_get_disks", 00:22:32.517 "nbd_stop_disk", 00:22:32.517 "nbd_start_disk", 00:22:32.517 "ublk_recover_disk", 00:22:32.517 "ublk_get_disks", 00:22:32.517 "ublk_stop_disk", 00:22:32.517 "ublk_start_disk", 00:22:32.517 "ublk_destroy_target", 00:22:32.517 "ublk_create_target", 00:22:32.517 "virtio_blk_create_transport", 00:22:32.517 "virtio_blk_get_transports", 00:22:32.517 "vhost_controller_set_coalescing", 00:22:32.517 "vhost_get_controllers", 00:22:32.517 "vhost_delete_controller", 00:22:32.517 "vhost_create_blk_controller", 00:22:32.517 "vhost_scsi_controller_remove_target", 00:22:32.517 "vhost_scsi_controller_add_target", 00:22:32.517 "vhost_start_scsi_controller", 00:22:32.517 "vhost_create_scsi_controller", 00:22:32.517 "thread_set_cpumask", 00:22:32.517 "framework_get_governor", 00:22:32.517 "framework_get_scheduler", 00:22:32.517 "framework_set_scheduler", 00:22:32.517 "framework_get_reactors", 00:22:32.517 "thread_get_io_channels", 00:22:32.517 "thread_get_pollers", 00:22:32.517 "thread_get_stats", 00:22:32.517 "framework_monitor_context_switch", 00:22:32.517 "spdk_kill_instance", 00:22:32.517 "log_enable_timestamps", 00:22:32.517 "log_get_flags", 00:22:32.517 "log_clear_flag", 00:22:32.517 "log_set_flag", 00:22:32.517 "log_get_level", 00:22:32.517 "log_set_level", 00:22:32.517 "log_get_print_level", 00:22:32.517 "log_set_print_level", 00:22:32.517 "framework_enable_cpumask_locks", 00:22:32.517 "framework_disable_cpumask_locks", 00:22:32.517 "framework_wait_init", 00:22:32.517 "framework_start_init", 00:22:32.517 "scsi_get_devices", 00:22:32.517 "bdev_get_histogram", 00:22:32.517 "bdev_enable_histogram", 00:22:32.517 "bdev_set_qos_limit", 00:22:32.517 "bdev_set_qd_sampling_period", 00:22:32.517 "bdev_get_bdevs", 00:22:32.517 "bdev_reset_iostat", 00:22:32.517 "bdev_get_iostat", 00:22:32.517 "bdev_examine", 00:22:32.517 "bdev_wait_for_examine", 00:22:32.517 "bdev_set_options", 00:22:32.517 "notify_get_notifications", 00:22:32.517 "notify_get_types", 00:22:32.517 "accel_get_stats", 00:22:32.517 "accel_set_options", 00:22:32.517 "accel_set_driver", 00:22:32.517 "accel_crypto_key_destroy", 00:22:32.517 "accel_crypto_keys_get", 00:22:32.517 "accel_crypto_key_create", 00:22:32.517 "accel_assign_opc", 00:22:32.517 "accel_get_module_info", 00:22:32.517 "accel_get_opc_assignments", 00:22:32.517 "vmd_rescan", 00:22:32.517 "vmd_remove_device", 00:22:32.517 "vmd_enable", 00:22:32.517 "sock_get_default_impl", 00:22:32.517 "sock_set_default_impl", 00:22:32.517 "sock_impl_set_options", 00:22:32.517 "sock_impl_get_options", 00:22:32.517 "iobuf_get_stats", 00:22:32.517 "iobuf_set_options", 00:22:32.517 "framework_get_pci_devices", 00:22:32.517 "framework_get_config", 00:22:32.517 "framework_get_subsystems", 00:22:32.517 "trace_get_info", 00:22:32.517 "trace_get_tpoint_group_mask", 00:22:32.517 
"trace_disable_tpoint_group", 00:22:32.517 "trace_enable_tpoint_group", 00:22:32.517 "trace_clear_tpoint_mask", 00:22:32.517 "trace_set_tpoint_mask", 00:22:32.517 "keyring_get_keys", 00:22:32.517 "spdk_get_version", 00:22:32.517 "rpc_get_methods" 00:22:32.517 ] 00:22:32.517 07:31:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:22:32.517 07:31:11 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.517 07:31:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 07:31:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:32.775 07:31:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63030 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63030 ']' 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63030 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63030 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.775 killing process with pid 63030 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63030' 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63030 00:22:32.775 07:31:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63030 00:22:35.301 00:22:35.301 real 0m4.390s 00:22:35.301 user 0m7.693s 00:22:35.301 sys 0m0.730s 00:22:35.301 ************************************ 00:22:35.301 END TEST spdkcli_tcp 00:22:35.301 ************************************ 00:22:35.301 07:31:13 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.301 07:31:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 07:31:13 -- common/autotest_common.sh@1142 -- # return 0 00:22:35.301 07:31:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:35.301 07:31:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:35.301 07:31:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:35.301 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 ************************************ 00:22:35.301 START TEST dpdk_mem_utility 00:22:35.301 ************************************ 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:35.301 * Looking for test storage... 
00:22:35.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:22:35.301 07:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:35.301 07:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:35.301 07:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63144 00:22:35.301 07:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63144 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63144 ']' 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.301 07:31:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 [2024-07-15 07:31:13.905046] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:22:35.301 [2024-07-15 07:31:13.905229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63144 ] 00:22:35.558 [2024-07-15 07:31:14.074181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.815 [2024-07-15 07:31:14.353414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.749 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.749 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:22:36.749 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:22:36.749 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:22:36.749 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.749 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:36.749 { 00:22:36.749 "filename": "/tmp/spdk_mem_dump.txt" 00:22:36.749 } 00:22:36.749 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.749 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:36.749 DPDK memory size 820.000000 MiB in 1 heap(s) 00:22:36.749 1 heaps totaling size 820.000000 MiB 00:22:36.749 size: 820.000000 MiB heap id: 0 00:22:36.749 end heaps---------- 00:22:36.749 8 mempools totaling size 598.116089 MiB 00:22:36.749 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:22:36.749 size: 158.602051 MiB name: PDU_data_out_Pool 00:22:36.749 size: 84.521057 MiB name: bdev_io_63144 00:22:36.749 size: 51.011292 MiB name: evtpool_63144 00:22:36.749 size: 50.003479 MiB name: msgpool_63144 00:22:36.749 size: 21.763794 MiB name: PDU_Pool 00:22:36.749 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:22:36.749 size: 0.026123 MiB name: Session_Pool 00:22:36.749 end mempools------- 00:22:36.749 6 memzones totaling size 4.142822 MiB 00:22:36.749 size: 1.000366 MiB name: RG_ring_0_63144 00:22:36.749 size: 1.000366 MiB name: RG_ring_1_63144 00:22:36.749 size: 1.000366 MiB name: RG_ring_4_63144 00:22:36.749 size: 1.000366 MiB name: RG_ring_5_63144 00:22:36.749 size: 0.125366 MiB name: RG_ring_2_63144 00:22:36.749 size: 0.015991 MiB name: RG_ring_3_63144 00:22:36.749 end memzones------- 00:22:36.749 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:22:37.007 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:22:37.007 list of free elements. size: 18.451538 MiB 00:22:37.007 element at address: 0x200000400000 with size: 1.999451 MiB 00:22:37.007 element at address: 0x200000800000 with size: 1.996887 MiB 00:22:37.007 element at address: 0x200007000000 with size: 1.995972 MiB 00:22:37.007 element at address: 0x20000b200000 with size: 1.995972 MiB 00:22:37.007 element at address: 0x200019100040 with size: 0.999939 MiB 00:22:37.007 element at address: 0x200019500040 with size: 0.999939 MiB 00:22:37.007 element at address: 0x200019600000 with size: 0.999084 MiB 00:22:37.007 element at address: 0x200003e00000 with size: 0.996094 MiB 00:22:37.007 element at address: 0x200032200000 with size: 0.994324 MiB 00:22:37.007 element at address: 0x200018e00000 with size: 0.959656 MiB 00:22:37.007 element at address: 0x200019900040 with size: 0.936401 MiB 00:22:37.007 element at address: 0x200000200000 with size: 0.829956 MiB 00:22:37.007 element at address: 0x20001b000000 with size: 0.564148 MiB 00:22:37.007 element at address: 0x200019200000 with size: 0.487976 MiB 00:22:37.007 element at address: 0x200019a00000 with size: 0.485413 MiB 00:22:37.007 element at address: 0x200013800000 with size: 0.467896 MiB 00:22:37.007 element at address: 0x200028400000 with size: 0.390442 MiB 00:22:37.007 element at address: 0x200003a00000 with size: 0.351990 MiB 00:22:37.007 list of standard malloc elements. 
size: 199.284058 MiB 00:22:37.007 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:22:37.007 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:22:37.007 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:22:37.007 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:22:37.007 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:22:37.007 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:22:37.007 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:22:37.007 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:22:37.007 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:22:37.007 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:22:37.007 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:22:37.007 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:22:37.007 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003aff980 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003affa80 with size: 0.000244 MiB 00:22:37.007 element at address: 0x200003eff000 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:22:37.007 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:22:37.007 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013877c80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013877d80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013877e80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013877f80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200013878580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200019abc680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:22:37.008 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200028463f40 with size: 0.000244 MiB 00:22:37.008 element at address: 0x200028464040 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846af80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b880 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846b980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846be80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846c880 with size: 0.000244 MiB 00:22:37.008 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d880 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846d980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846da80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846db80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846de80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846df80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e880 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846e980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f080 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f180 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f280 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f380 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f480 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f580 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f680 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f780 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f880 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846f980 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:22:37.008 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:22:37.008 list of memzone associated elements. size: 602.264404 MiB 00:22:37.008 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:22:37.008 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:22:37.008 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:22:37.008 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:22:37.008 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:22:37.008 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63144_0 00:22:37.008 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:22:37.008 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63144_0 00:22:37.008 element at address: 0x200003fff340 with size: 48.003113 MiB 00:22:37.008 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63144_0 00:22:37.009 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:22:37.009 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:22:37.009 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:22:37.009 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:22:37.009 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:22:37.009 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63144 00:22:37.009 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:22:37.009 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63144 00:22:37.009 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:22:37.009 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63144 00:22:37.009 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:22:37.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:22:37.009 element at address: 0x200019abc780 with size: 1.008179 MiB 00:22:37.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:22:37.009 element at address: 0x200018efde00 with size: 1.008179 MiB 00:22:37.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:22:37.009 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:22:37.009 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:22:37.009 element at address: 0x200003eff100 with size: 1.000549 MiB 00:22:37.009 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63144 00:22:37.009 element at address: 0x200003affb80 with size: 1.000549 MiB 00:22:37.009 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63144 00:22:37.009 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:22:37.009 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63144 00:22:37.009 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:22:37.009 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63144 00:22:37.009 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:22:37.009 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63144 00:22:37.009 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:22:37.009 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:22:37.009 element at address: 0x200013878680 with size: 0.500549 MiB 
00:22:37.009 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:22:37.009 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:22:37.009 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:22:37.009 element at address: 0x200003adf740 with size: 0.125549 MiB 00:22:37.009 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63144 00:22:37.009 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:22:37.009 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:22:37.009 element at address: 0x200028464140 with size: 0.023804 MiB 00:22:37.009 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:22:37.009 element at address: 0x200003adb500 with size: 0.016174 MiB 00:22:37.009 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63144 00:22:37.009 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:22:37.009 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:22:37.009 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:22:37.009 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63144 00:22:37.009 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:22:37.009 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63144 00:22:37.009 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:22:37.009 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:22:37.009 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:22:37.009 07:31:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63144 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63144 ']' 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63144 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63144 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.009 killing process with pid 63144 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63144' 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63144 00:22:37.009 07:31:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63144 00:22:39.528 00:22:39.528 real 0m4.321s 00:22:39.528 user 0m4.241s 00:22:39.528 sys 0m0.706s 00:22:39.528 ************************************ 00:22:39.528 END TEST dpdk_mem_utility 00:22:39.528 ************************************ 00:22:39.528 07:31:18 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.528 07:31:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:39.528 07:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:22:39.528 07:31:18 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:22:39.528 07:31:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:39.528 07:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.529 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:22:39.529 
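The heap listing that ends here is the DPDK malloc/memzone dump captured by test_dpdk_mem_info.sh before it kills the target (pid 63144): a long free-element list in 0.000244 MiB units followed by the named memzones (msgpool, evtpool, bdev_io, the PDU and SCSI task pools) with their exact sizes. A rough way to reproduce such a dump outside the harness is sketched below; the test/ prefix on the script path and the env_dpdk_get_mem_stats RPC are assumptions based on the usual SPDK layout rather than anything shown in this log.

# run the same test standalone from an SPDK checkout (path prefix assumed)
sudo ./test/dpdk_memory_utility/test_dpdk_mem_info.sh

# or ask a running SPDK target for its DPDK memory stats directly;
# env_dpdk_get_mem_stats is assumed to exist in this SPDK version and
# to write the dump to a file whose path it returns
sudo ./scripts/rpc.py env_dpdk_get_mem_stats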
************************************ 00:22:39.529 START TEST event 00:22:39.529 ************************************ 00:22:39.529 07:31:18 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:22:39.785 * Looking for test storage... 00:22:39.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:22:39.785 07:31:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:39.785 07:31:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:22:39.785 07:31:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:22:39.785 07:31:18 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:22:39.785 07:31:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.785 07:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:22:39.785 ************************************ 00:22:39.785 START TEST event_perf 00:22:39.785 ************************************ 00:22:39.785 07:31:18 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:22:39.785 Running I/O for 1 seconds...[2024-07-15 07:31:18.225657] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:22:39.785 [2024-07-15 07:31:18.225863] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:22:40.042 [2024-07-15 07:31:18.409306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.298 [2024-07-15 07:31:18.752138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.298 [2024-07-15 07:31:18.752178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.298 [2024-07-15 07:31:18.752274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.298 [2024-07-15 07:31:18.752279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.670 Running I/O for 1 seconds... 00:22:41.670 lcore 0: 186507 00:22:41.670 lcore 1: 186510 00:22:41.670 lcore 2: 186503 00:22:41.670 lcore 3: 186505 00:22:41.670 done. 
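The event_perf run above starts one reactor per core in the 0xF mask and, after the one-second window set by -t 1, prints how many events each lcore managed to process (roughly 186,500 apiece here). Rerunning it by hand only needs the binary path and the two flags that appear in the log; root privileges are assumed so the app can initialize hugepages the way the CI does.

# four cores for one second, as in the log
sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

# same flags, different values: two cores for five seconds
sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5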
00:22:41.670 00:22:41.670 real 0m2.043s 00:22:41.670 user 0m4.750s 00:22:41.670 sys 0m0.167s 00:22:41.670 07:31:20 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.670 ************************************ 00:22:41.670 07:31:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:22:41.670 END TEST event_perf 00:22:41.670 ************************************ 00:22:41.670 07:31:20 event -- common/autotest_common.sh@1142 -- # return 0 00:22:41.670 07:31:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:22:41.670 07:31:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:41.670 07:31:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.670 07:31:20 event -- common/autotest_common.sh@10 -- # set +x 00:22:41.670 ************************************ 00:22:41.670 START TEST event_reactor 00:22:41.670 ************************************ 00:22:41.670 07:31:20 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:22:41.928 [2024-07-15 07:31:20.311820] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:22:41.928 [2024-07-15 07:31:20.311994] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63295 ] 00:22:41.928 [2024-07-15 07:31:20.479307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.185 [2024-07-15 07:31:20.754230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.085 test_start 00:22:44.085 oneshot 00:22:44.085 tick 100 00:22:44.085 tick 100 00:22:44.085 tick 250 00:22:44.085 tick 100 00:22:44.085 tick 100 00:22:44.085 tick 100 00:22:44.085 tick 250 00:22:44.085 tick 500 00:22:44.085 tick 100 00:22:44.085 tick 100 00:22:44.085 tick 250 00:22:44.085 tick 100 00:22:44.085 tick 100 00:22:44.085 test_end 00:22:44.085 00:22:44.085 real 0m1.933s 00:22:44.085 user 0m1.694s 00:22:44.085 sys 0m0.128s 00:22:44.085 07:31:22 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.085 ************************************ 00:22:44.085 END TEST event_reactor 00:22:44.085 ************************************ 00:22:44.085 07:31:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 07:31:22 event -- common/autotest_common.sh@1142 -- # return 0 00:22:44.085 07:31:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:22:44.085 07:31:22 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:44.085 07:31:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.085 07:31:22 event -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 ************************************ 00:22:44.085 START TEST event_reactor_perf 00:22:44.085 ************************************ 00:22:44.085 07:31:22 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:22:44.085 [2024-07-15 07:31:22.294521] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:22:44.085 [2024-07-15 07:31:22.294699] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63337 ] 00:22:44.085 [2024-07-15 07:31:22.464785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.343 [2024-07-15 07:31:22.764268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.718 test_start 00:22:45.718 test_end 00:22:45.718 Performance: 281948 events per second 00:22:45.718 00:22:45.718 real 0m1.969s 00:22:45.718 user 0m1.715s 00:22:45.718 sys 0m0.142s 00:22:45.718 07:31:24 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.718 07:31:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:22:45.718 ************************************ 00:22:45.718 END TEST event_reactor_perf 00:22:45.718 ************************************ 00:22:45.718 07:31:24 event -- common/autotest_common.sh@1142 -- # return 0 00:22:45.718 07:31:24 event -- event/event.sh@49 -- # uname -s 00:22:45.718 07:31:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:22:45.718 07:31:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:22:45.719 07:31:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:45.719 07:31:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.719 07:31:24 event -- common/autotest_common.sh@10 -- # set +x 00:22:45.719 ************************************ 00:22:45.719 START TEST event_scheduler 00:22:45.719 ************************************ 00:22:45.719 07:31:24 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:22:45.977 * Looking for test storage... 00:22:45.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:22:45.977 07:31:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:22:45.977 07:31:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63407 00:22:45.977 07:31:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:22:45.977 07:31:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63407 00:22:45.977 07:31:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:22:45.977 07:31:24 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63407 ']' 00:22:45.977 07:31:24 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.978 07:31:24 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.978 07:31:24 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.978 07:31:24 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.978 07:31:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 [2024-07-15 07:31:24.472365] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:22:45.978 [2024-07-15 07:31:24.473224] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:22:46.235 [2024-07-15 07:31:24.660031] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.494 [2024-07-15 07:31:24.961338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.494 [2024-07-15 07:31:24.961521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.494 [2024-07-15 07:31:24.961652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.494 [2024-07-15 07:31:24.961761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:22:47.062 07:31:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:47.062 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:22:47.062 POWER: Cannot set governor of lcore 0 to userspace 00:22:47.062 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:22:47.062 POWER: Cannot set governor of lcore 0 to performance 00:22:47.062 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:22:47.062 POWER: Cannot set governor of lcore 0 to userspace 00:22:47.062 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:22:47.062 POWER: Cannot set governor of lcore 0 to userspace 00:22:47.062 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:22:47.062 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:22:47.062 POWER: Unable to set Power Management Environment for lcore 0 00:22:47.062 [2024-07-15 07:31:25.467779] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:22:47.062 [2024-07-15 07:31:25.467806] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:22:47.062 [2024-07-15 07:31:25.467823] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:22:47.062 [2024-07-15 07:31:25.467850] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:22:47.062 [2024-07-15 07:31:25.467867] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:22:47.062 [2024-07-15 07:31:25.467879] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.062 07:31:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.062 07:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 [2024-07-15 07:31:25.837097] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
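The POWER errors above are expected on this VM: when the dynamic scheduler is selected, the DPDK power library tries to take over each core's cpufreq governor through /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor, the guest exposes no such interface, and the scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95, as logged). A quick host-side check for whether the governor hand-off could ever succeed:

# if this directory is missing or empty, the dpdk governor cannot initialize
ls /sys/devices/system/cpu/cpu0/cpufreq/ 2>/dev/null || echo 'no cpufreq interface on this machine'

# on bare metal this typically lists governors such as performance and powersave
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors 2>/dev/null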
00:22:47.321 07:31:25 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:22:47.321 07:31:25 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:47.321 07:31:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 ************************************ 00:22:47.321 START TEST scheduler_create_thread 00:22:47.321 ************************************ 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 2 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 3 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 4 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 5 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 6 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 7 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 8 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 9 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.321 10 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.321 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.579 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.579 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:22:47.579 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:22:47.579 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.579 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:47.580 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.580 07:31:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:22:47.580 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.580 07:31:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:48.951 07:31:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.951 07:31:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:22:48.951 07:31:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:22:48.951 07:31:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.951 07:31:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.887 07:31:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.887 ************************************ 00:22:49.888 END TEST scheduler_create_thread 00:22:49.888 ************************************ 00:22:49.888 00:22:49.888 real 0m2.619s 00:22:49.888 user 0m0.017s 00:22:49.888 sys 0m0.007s 00:22:49.888 07:31:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:49.888 07:31:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:22:50.146 07:31:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:50.146 07:31:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63407 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63407 ']' 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63407 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63407 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:50.146 killing process with pid 63407 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63407' 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63407 00:22:50.146 07:31:28 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63407 00:22:50.404 [2024-07-15 07:31:28.948595] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
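scheduler_create_thread drives the scheduler app purely over its plugin RPCs: scheduler_thread_create registers a thread with a name, an optional cpumask (-m) and an active percentage (-a), scheduler_thread_set_active changes that percentage on a live thread (thread 11 above), and scheduler_thread_delete removes one (thread 12 above). rpc_cmd is the harness wrapper around scripts/rpc.py, so the same calls can be issued directly as sketched below, assuming the default /var/tmp/spdk.sock socket and that the scheduler_plugin module from test/event/scheduler is on PYTHONPATH.

./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12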
00:22:51.778 00:22:51.778 real 0m6.010s 00:22:51.778 user 0m9.928s 00:22:51.778 sys 0m0.586s 00:22:51.778 07:31:30 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:51.778 07:31:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:51.778 ************************************ 00:22:51.778 END TEST event_scheduler 00:22:51.778 ************************************ 00:22:51.778 07:31:30 event -- common/autotest_common.sh@1142 -- # return 0 00:22:51.778 07:31:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:22:51.778 07:31:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:22:51.778 07:31:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:51.778 07:31:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:51.778 07:31:30 event -- common/autotest_common.sh@10 -- # set +x 00:22:51.778 ************************************ 00:22:51.778 START TEST app_repeat 00:22:51.778 ************************************ 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63524 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63524' 00:22:51.778 Process app_repeat pid: 63524 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:22:51.778 spdk_app_start Round 0 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:22:51.778 07:31:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63524 /var/tmp/spdk-nbd.sock 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63524 ']' 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.778 07:31:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:52.036 [2024-07-15 07:31:30.405246] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:22:52.036 [2024-07-15 07:31:30.405467] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63524 ] 00:22:52.036 [2024-07-15 07:31:30.584647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:52.294 [2024-07-15 07:31:30.863339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.294 [2024-07-15 07:31:30.863346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.859 07:31:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.859 07:31:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:22:52.859 07:31:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:53.118 Malloc0 00:22:53.377 07:31:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:53.635 Malloc1 00:22:53.635 07:31:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.635 07:31:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:22:53.893 /dev/nbd0 00:22:53.893 07:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:53.893 07:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:53.893 07:31:32 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:53.893 1+0 records in 00:22:53.893 1+0 records out 00:22:53.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268612 s, 15.2 MB/s 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:53.893 07:31:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:22:53.893 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:53.893 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:53.893 07:31:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:22:54.151 /dev/nbd1 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:54.151 1+0 records in 00:22:54.151 1+0 records out 00:22:54.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028181 s, 14.5 MB/s 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:54.151 07:31:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:54.151 07:31:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.151 
07:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:54.409 07:31:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:54.409 { 00:22:54.409 "nbd_device": "/dev/nbd0", 00:22:54.409 "bdev_name": "Malloc0" 00:22:54.409 }, 00:22:54.409 { 00:22:54.409 "nbd_device": "/dev/nbd1", 00:22:54.409 "bdev_name": "Malloc1" 00:22:54.409 } 00:22:54.409 ]' 00:22:54.409 07:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:54.409 { 00:22:54.409 "nbd_device": "/dev/nbd0", 00:22:54.409 "bdev_name": "Malloc0" 00:22:54.409 }, 00:22:54.409 { 00:22:54.409 "nbd_device": "/dev/nbd1", 00:22:54.409 "bdev_name": "Malloc1" 00:22:54.409 } 00:22:54.409 ]' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:54.410 /dev/nbd1' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:54.410 /dev/nbd1' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:22:54.410 256+0 records in 00:22:54.410 256+0 records out 00:22:54.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770178 s, 136 MB/s 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:54.410 256+0 records in 00:22:54.410 256+0 records out 00:22:54.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031272 s, 33.5 MB/s 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:54.410 07:31:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:54.668 256+0 records in 00:22:54.668 256+0 records out 00:22:54.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300062 s, 34.9 MB/s 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:54.668 07:31:33 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.668 07:31:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:54.925 07:31:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:55.182 07:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:55.182 07:31:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:55.182 07:31:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:55.183 07:31:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.183 07:31:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:55.440 07:31:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:22:55.440 07:31:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:22:56.006 07:31:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:22:57.379 [2024-07-15 07:31:35.760624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:57.636 [2024-07-15 07:31:36.026278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.636 [2024-07-15 07:31:36.026288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.636 [2024-07-15 07:31:36.243510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:22:57.636 [2024-07-15 07:31:36.243650] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:22:59.009 07:31:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:22:59.009 spdk_app_start Round 1 00:22:59.009 07:31:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:22:59.009 07:31:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63524 /var/tmp/spdk-nbd.sock 00:22:59.009 07:31:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63524 ']' 00:22:59.009 07:31:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:59.010 07:31:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.010 07:31:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:22:59.010 07:31:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.010 07:31:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:59.268 07:31:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.268 07:31:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:22:59.268 07:31:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:59.525 Malloc0 00:22:59.525 07:31:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:59.783 Malloc1 00:22:59.783 07:31:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.783 07:31:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:00.041 /dev/nbd0 00:23:00.041 07:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:00.042 07:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:00.042 1+0 records in 00:23:00.042 1+0 records out 
00:23:00.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313219 s, 13.1 MB/s 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:00.042 07:31:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:23:00.042 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.042 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.042 07:31:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:00.299 /dev/nbd1 00:23:00.299 07:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:00.299 07:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:00.299 1+0 records in 00:23:00.299 1+0 records out 00:23:00.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493242 s, 8.3 MB/s 00:23:00.299 07:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:00.557 07:31:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:23:00.557 07:31:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:00.557 07:31:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:00.557 07:31:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:23:00.557 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.557 07:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.557 07:31:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:00.557 07:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:00.557 07:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:00.557 07:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:00.557 { 00:23:00.557 "nbd_device": "/dev/nbd0", 00:23:00.557 "bdev_name": "Malloc0" 00:23:00.557 }, 00:23:00.557 { 00:23:00.557 "nbd_device": "/dev/nbd1", 00:23:00.557 "bdev_name": "Malloc1" 00:23:00.557 } 
00:23:00.557 ]' 00:23:00.557 07:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:00.557 { 00:23:00.557 "nbd_device": "/dev/nbd0", 00:23:00.557 "bdev_name": "Malloc0" 00:23:00.557 }, 00:23:00.557 { 00:23:00.557 "nbd_device": "/dev/nbd1", 00:23:00.557 "bdev_name": "Malloc1" 00:23:00.557 } 00:23:00.557 ]' 00:23:00.557 07:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:00.815 /dev/nbd1' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:00.815 /dev/nbd1' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:00.815 256+0 records in 00:23:00.815 256+0 records out 00:23:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105498 s, 99.4 MB/s 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:00.815 256+0 records in 00:23:00.815 256+0 records out 00:23:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284276 s, 36.9 MB/s 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:00.815 256+0 records in 00:23:00.815 256+0 records out 00:23:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0370324 s, 28.3 MB/s 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:00.815 07:31:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.815 07:31:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.073 07:31:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.331 07:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:01.589 07:31:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:01.589 07:31:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:01.589 07:31:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:01.889 07:31:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:01.889 07:31:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:02.147 07:31:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:03.521 [2024-07-15 07:31:42.053262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:03.778 [2024-07-15 07:31:42.318160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.778 [2024-07-15 07:31:42.318160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.035 [2024-07-15 07:31:42.534159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:04.035 [2024-07-15 07:31:42.534236] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:05.406 spdk_app_start Round 2 00:23:05.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:05.406 07:31:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:05.406 07:31:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:05.406 07:31:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63524 /var/tmp/spdk-nbd.sock 00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63524 ']' 00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.406 07:31:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:05.663 07:31:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.663 07:31:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:23:05.663 07:31:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:05.920 Malloc0 00:23:05.920 07:31:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:06.176 Malloc1 00:23:06.176 07:31:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:06.176 07:31:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:06.432 /dev/nbd0 00:23:06.432 07:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:06.432 07:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:06.432 1+0 records in 00:23:06.432 1+0 records out 
00:23:06.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386487 s, 10.6 MB/s 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:06.432 07:31:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:23:06.432 07:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.432 07:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:06.432 07:31:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:06.688 /dev/nbd1 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:06.688 1+0 records in 00:23:06.688 1+0 records out 00:23:06.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040092 s, 10.2 MB/s 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:06.688 07:31:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.688 07:31:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:06.944 07:31:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:06.945 { 00:23:06.945 "nbd_device": "/dev/nbd0", 00:23:06.945 "bdev_name": "Malloc0" 00:23:06.945 }, 00:23:06.945 { 00:23:06.945 "nbd_device": "/dev/nbd1", 00:23:06.945 "bdev_name": "Malloc1" 00:23:06.945 } 
00:23:06.945 ]' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:06.945 { 00:23:06.945 "nbd_device": "/dev/nbd0", 00:23:06.945 "bdev_name": "Malloc0" 00:23:06.945 }, 00:23:06.945 { 00:23:06.945 "nbd_device": "/dev/nbd1", 00:23:06.945 "bdev_name": "Malloc1" 00:23:06.945 } 00:23:06.945 ]' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:06.945 /dev/nbd1' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:06.945 /dev/nbd1' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:06.945 256+0 records in 00:23:06.945 256+0 records out 00:23:06.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491333 s, 213 MB/s 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:06.945 256+0 records in 00:23:06.945 256+0 records out 00:23:06.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266545 s, 39.3 MB/s 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:06.945 256+0 records in 00:23:06.945 256+0 records out 00:23:06.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381911 s, 27.5 MB/s 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:06.945 07:31:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.945 07:31:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.510 07:31:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.510 07:31:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:08.075 07:31:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:08.075 07:31:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:08.333 07:31:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:09.706 [2024-07-15 07:31:48.221045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.963 [2024-07-15 07:31:48.486067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.963 [2024-07-15 07:31:48.486075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.221 [2024-07-15 07:31:48.701748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:10.221 [2024-07-15 07:31:48.701826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:11.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:11.594 07:31:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63524 /var/tmp/spdk-nbd.sock 00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63524 ']' 00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.594 07:31:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:23:11.594 07:31:50 event.app_repeat -- event/event.sh@39 -- # killprocess 63524 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63524 ']' 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63524 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63524 00:23:11.594 killing process with pid 63524 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63524' 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63524 00:23:11.594 07:31:50 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63524 00:23:12.993 spdk_app_start is called in Round 0. 00:23:12.993 Shutdown signal received, stop current app iteration 00:23:12.993 Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 reinitialization... 00:23:12.993 spdk_app_start is called in Round 1. 00:23:12.993 Shutdown signal received, stop current app iteration 00:23:12.993 Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 reinitialization... 00:23:12.993 spdk_app_start is called in Round 2. 00:23:12.993 Shutdown signal received, stop current app iteration 00:23:12.993 Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 reinitialization... 00:23:12.993 spdk_app_start is called in Round 3. 
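Each of the rounds traced above exercises the same nbd data-verify cycle from bdev/nbd_common.sh: export two Malloc bdevs as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through them, compare the devices back against the source file, then tear the exports down. A condensed shell sketch of that cycle follows; it assumes an spdk_tgt is already serving RPC on /var/tmp/spdk-nbd.sock, and the scratch path /tmp/nbdrandtest is illustrative rather than the test's own file.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=/tmp/nbdrandtest    # illustrative scratch file, not the test's path

# Create the bdevs and export them over nbd (the RPCs return the names Malloc0/Malloc1).
"$rpc" -s "$sock" bdev_malloc_create 64 4096
"$rpc" -s "$sock" bdev_malloc_create 64 4096
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

# Count the attached devices the way nbd_get_count does it.
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd

# Write phase: 1 MiB of random data, pushed to each device with O_DIRECT.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify phase: the first 1 MiB of each device must match the source file byte for byte.
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$tmp" "$nbd"
done
rm "$tmp"

# Teardown: stop each export and wait for the kernel entry to leave /proc/partitions.
for name in nbd0 nbd1; do
  "$rpc" -s "$sock" nbd_stop_disk "/dev/$name"
  while grep -q -w "$name" /proc/partitions; do sleep 0.1; done
done
"$rpc" -s "$sock" spdk_kill_instance SIGTERM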
00:23:12.993 Shutdown signal received, stop current app iteration 00:23:12.993 07:31:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:23:12.993 07:31:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:23:12.993 00:23:12.993 real 0m21.064s 00:23:12.993 user 0m44.309s 00:23:12.993 sys 0m3.231s 00:23:12.993 ************************************ 00:23:12.993 END TEST app_repeat 00:23:12.993 ************************************ 00:23:12.993 07:31:51 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.993 07:31:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:12.993 07:31:51 event -- common/autotest_common.sh@1142 -- # return 0 00:23:12.993 07:31:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:23:12.993 07:31:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:12.993 07:31:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:12.993 07:31:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.993 07:31:51 event -- common/autotest_common.sh@10 -- # set +x 00:23:12.993 ************************************ 00:23:12.993 START TEST cpu_locks 00:23:12.993 ************************************ 00:23:12.993 07:31:51 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:12.993 * Looking for test storage... 00:23:12.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:23:12.993 07:31:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:23:12.993 07:31:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:23:12.993 07:31:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:23:12.993 07:31:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:23:12.993 07:31:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:12.993 07:31:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.993 07:31:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:12.993 ************************************ 00:23:12.993 START TEST default_locks 00:23:12.994 ************************************ 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63978 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63978 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63978 ']' 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
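The default_locks case starting here reduces to one property: an spdk_tgt launched with -m 0x1 must hold an advisory lock named spdk_cpu_lock for its core, and once the process has been killed, waiting on the same pid has to fail. A reduced sketch of that check, with the pid captured in an illustrative variable and the RPC-socket wait omitted for brevity:

# Start a single-core target in the background (the test additionally waits for
# /var/tmp/spdk.sock via waitforlisten before touching it).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!

# locks_exist: the running reactor must show an spdk_cpu_lock entry in lslocks.
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

# killprocess: confirm the pid is alive, check its comm name (reactor_0 in the trace;
# a sudo wrapper would be handled differently), then terminate and reap it.
kill -0 "$spdk_tgt_pid"
ps --no-headers -o comm= "$spdk_tgt_pid"
kill "$spdk_tgt_pid"
wait "$spdk_tgt_pid"

# Negative half of the test: the pid is gone, so any further wait on it must error out
# ("No such process"), which the trace records as the expected NOT waitforlisten result.
! kill -0 "$spdk_tgt_pid" 2>/dev/null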
00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.994 07:31:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:13.252 [2024-07-15 07:31:51.689316] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:13.252 [2024-07-15 07:31:51.689539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63978 ] 00:23:13.510 [2024-07-15 07:31:51.869187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.768 [2024-07-15 07:31:52.146102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.701 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.701 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:23:14.701 07:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63978 00:23:14.701 07:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:14.701 07:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63978 ']' 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.959 killing process with pid 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63978' 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63978 00:23:14.959 07:31:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63978 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63978 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63978 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63978 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 63978 ']' 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.485 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:17.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63978) - No such process 00:23:17.485 ERROR: process (pid: 63978) is no longer running 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:17.486 00:23:17.486 real 0m4.401s 00:23:17.486 user 0m4.240s 00:23:17.486 sys 0m0.807s 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.486 ************************************ 00:23:17.486 END TEST default_locks 00:23:17.486 ************************************ 00:23:17.486 07:31:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:17.486 07:31:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:17.486 07:31:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:23:17.486 07:31:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.486 07:31:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.486 07:31:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:17.486 ************************************ 00:23:17.486 START TEST default_locks_via_rpc 00:23:17.486 ************************************ 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64059 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64059 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 64059 ']' 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.486 07:31:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.745 [2024-07-15 07:31:56.127798] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:17.745 [2024-07-15 07:31:56.127984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64059 ] 00:23:17.745 [2024-07-15 07:31:56.295752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.002 [2024-07-15 07:31:56.574989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64059 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64059 00:23:18.937 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64059 00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64059 ']' 
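In the default_locks_via_rpc trace around this point, the CPU-core lock is released and re-taken at runtime through RPC instead of a command-line flag. A minimal sketch of that toggle, assuming a target is already up on the default socket /var/tmp/spdk.sock with its pid in an illustrative $spdk_tgt_pid (the test itself looks for leftover lock files through its no_locks helper; lslocks is shown here because the same pattern appears in locks_exist above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Drop the core lock at runtime; the lslocks entry for the pid should disappear.
"$rpc" framework_disable_cpumask_locks
! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock

# Take it again; a locks_exist-style check must pass once more.
"$rpc" framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock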
00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64059 00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.503 07:31:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64059 00:23:19.503 07:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:19.503 07:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:19.503 killing process with pid 64059 00:23:19.503 07:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64059' 00:23:19.503 07:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64059 00:23:19.503 07:31:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64059 00:23:22.032 00:23:22.032 real 0m4.489s 00:23:22.032 user 0m4.335s 00:23:22.032 sys 0m0.875s 00:23:22.032 07:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.032 07:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:22.032 ************************************ 00:23:22.032 END TEST default_locks_via_rpc 00:23:22.032 ************************************ 00:23:22.032 07:32:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:22.032 07:32:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:23:22.032 07:32:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:22.032 07:32:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.032 07:32:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:22.032 ************************************ 00:23:22.032 START TEST non_locking_app_on_locked_coremask 00:23:22.032 ************************************ 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64138 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64138 /var/tmp/spdk.sock 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64138 ']' 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.032 07:32:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:22.291 [2024-07-15 07:32:00.692879] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:22.291 [2024-07-15 07:32:00.693091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64138 ] 00:23:22.291 [2024-07-15 07:32:00.881436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.547 [2024-07-15 07:32:01.159943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64160 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64160 /var/tmp/spdk2.sock 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64160 ']' 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.479 07:32:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:23.736 [2024-07-15 07:32:02.211464] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:23.736 [2024-07-15 07:32:02.212254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64160 ] 00:23:23.994 [2024-07-15 07:32:02.405074] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
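At this point the trace has two targets on the same core mask: pid 64138 was started normally with -m 0x1 and owns the core lock, and the second instance only comes up because it is told to skip that lock and to use its own RPC socket, which is what the "CPU core locks deactivated" notice just above reports. A condensed sketch of the pairing, with the pids in illustrative variables:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance claims core 0 and, with it, the spdk_cpu_lock for that core.
"$spdk_tgt" -m 0x1 &
pid1=$!

# Second instance on the same mask starts only because it opts out of the lock, and it
# needs a separate RPC socket so the two targets do not collide on /var/tmp/spdk.sock.
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!

# Only the first instance is expected to show up as the lock holder (the trace checks
# this for pid 64138; the second pid is not inspected there).
lslocks -p "$pid1" | grep -q spdk_cpu_lock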
00:23:23.994 [2024-07-15 07:32:02.405182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.560 [2024-07-15 07:32:02.956050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.457 07:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.457 07:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:26.457 07:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64138 00:23:26.457 07:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64138 00:23:26.457 07:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64138 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64138 ']' 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64138 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64138 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:27.392 killing process with pid 64138 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64138' 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64138 00:23:27.392 07:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64138 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64160 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64160 ']' 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64160 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64160 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64160' 00:23:32.708 killing process with pid 64160 00:23:32.708 07:32:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64160 00:23:32.708 07:32:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64160 00:23:35.279 00:23:35.279 real 0m12.701s 00:23:35.279 user 0m12.965s 00:23:35.279 sys 0m1.753s 00:23:35.279 07:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.279 07:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:35.279 ************************************ 00:23:35.279 END TEST non_locking_app_on_locked_coremask 00:23:35.279 ************************************ 00:23:35.279 07:32:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:35.279 07:32:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:23:35.279 07:32:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:35.279 07:32:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.279 07:32:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:35.279 ************************************ 00:23:35.279 START TEST locking_app_on_unlocked_coremask 00:23:35.279 ************************************ 00:23:35.279 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64319 00:23:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64319 /var/tmp/spdk.sock 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64319 ']' 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.280 07:32:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:35.280 [2024-07-15 07:32:13.420160] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:35.280 [2024-07-15 07:32:13.420594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64319 ] 00:23:35.280 [2024-07-15 07:32:13.590532] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
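The kill sequences traced throughout this log (kill -0, ps -o comm=, kill, wait) follow the killprocess-style helper from autotest_common.sh. A simplified sketch of that flow, with the uname branch and error reporting omitted (an approximation, not the harness source):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                    # fail fast if the process is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")           # reads back reactor_0 for a healthy spdk_tgt
      [ "$name" != sudo ]                               # refuse to kill a sudo wrapper by mistake
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it so the next test starts from a clean state
  }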
00:23:35.280 [2024-07-15 07:32:13.590872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.537 [2024-07-15 07:32:13.897817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64335 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64335 /var/tmp/spdk2.sock 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64335 ']' 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.472 07:32:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:36.472 [2024-07-15 07:32:14.912907] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:23:36.472 [2024-07-15 07:32:14.913338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64335 ] 00:23:36.730 [2024-07-15 07:32:15.092874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.296 [2024-07-15 07:32:15.639781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.218 07:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.218 07:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:39.218 07:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64335 00:23:39.218 07:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64335 00:23:39.218 07:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64319 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64319 ']' 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64319 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64319 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64319' 00:23:40.152 killing process with pid 64319 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64319 00:23:40.152 07:32:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64319 00:23:45.412 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64335 00:23:45.412 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64335 ']' 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64335 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64335 00:23:45.413 killing process with pid 64335 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.413 07:32:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64335' 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64335 00:23:45.413 07:32:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64335 00:23:47.942 ************************************ 00:23:47.942 END TEST locking_app_on_unlocked_coremask 00:23:47.943 ************************************ 00:23:47.943 00:23:47.943 real 0m12.670s 00:23:47.943 user 0m12.890s 00:23:47.943 sys 0m1.720s 00:23:47.943 07:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.943 07:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:47.943 07:32:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:47.943 07:32:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:23:47.943 07:32:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:47.943 07:32:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.943 07:32:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:47.943 ************************************ 00:23:47.943 START TEST locking_app_on_locked_coremask 00:23:47.943 ************************************ 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64494 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64494 /var/tmp/spdk.sock 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64494 ']' 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.943 07:32:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:47.943 [2024-07-15 07:32:26.158386] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:23:47.943 [2024-07-15 07:32:26.158671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64494 ] 00:23:47.943 [2024-07-15 07:32:26.331839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.200 [2024-07-15 07:32:26.624975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64516 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64516 /var/tmp/spdk2.sock 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64516 /var/tmp/spdk2.sock 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64516 /var/tmp/spdk2.sock 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64516 ']' 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.133 07:32:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:49.392 [2024-07-15 07:32:27.772141] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:23:49.392 [2024-07-15 07:32:27.772639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64516 ] 00:23:49.392 [2024-07-15 07:32:27.961315] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64494 has claimed it. 00:23:49.392 [2024-07-15 07:32:27.961440] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:23:49.958 ERROR: process (pid: 64516) is no longer running 00:23:49.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64516) - No such process 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64494 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64494 00:23:49.958 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64494 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64494 ']' 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64494 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64494 00:23:50.215 killing process with pid 64494 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64494' 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64494 00:23:50.215 07:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64494 00:23:52.744 00:23:52.744 real 0m5.205s 00:23:52.744 user 0m5.264s 00:23:52.744 sys 0m0.989s 00:23:52.744 07:32:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.744 07:32:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:23:52.744 ************************************ 00:23:52.744 END TEST locking_app_on_locked_coremask 00:23:52.744 ************************************ 00:23:52.744 07:32:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:52.744 07:32:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:23:52.744 07:32:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:52.744 07:32:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.744 07:32:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:52.744 ************************************ 00:23:52.744 START TEST locking_overlapped_coremask 00:23:52.744 ************************************ 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:23:52.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64585 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64585 /var/tmp/spdk.sock 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64585 ']' 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.744 07:32:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:53.003 [2024-07-15 07:32:31.407485] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
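locking_app_on_locked_coremask asserts a negative: the second spdk_tgt (pid 64516) must fail to claim core 0 because pid 64494 already holds its lock. The NOT wrapper traced above inverts the exit status of waitforlisten to turn that expected failure into a test pass. A reduced sketch of the idea (the real helper also validates its argument and treats exit codes above 128 specially):

  NOT() {
      local es=0
      "$@" || es=$?        # run the wrapped command and capture its exit status
      (( !es == 0 ))       # succeed only if the wrapped command actually failed
  }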
00:23:53.003 [2024-07-15 07:32:31.407669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64585 ] 00:23:53.003 [2024-07-15 07:32:31.582066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:53.569 [2024-07-15 07:32:31.899078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.569 [2024-07-15 07:32:31.899180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.569 [2024-07-15 07:32:31.899196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64609 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64609 /var/tmp/spdk2.sock 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64609 /var/tmp/spdk2.sock 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:23:54.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64609 /var/tmp/spdk2.sock 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64609 ']' 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.503 07:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:54.503 [2024-07-15 07:32:32.982030] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
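In locking_overlapped_coremask the first target runs with -m 0x7 and the second with -m 0x1c, so exactly one core is contested. Worked out from the masks:

  0x7  = 0b00111 -> cores 0, 1, 2   (pid 64585)
  0x1c = 0b11100 -> cores 2, 3, 4   (pid 64609)
  intersection   -> core 2

which is why the second instance is expected to abort with "Cannot create lock on core 2".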
00:23:54.503 [2024-07-15 07:32:32.982264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64609 ] 00:23:54.762 [2024-07-15 07:32:33.173760] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64585 has claimed it. 00:23:54.762 [2024-07-15 07:32:33.173859] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:23:55.019 ERROR: process (pid: 64609) is no longer running 00:23:55.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64609) - No such process 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64585 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64585 ']' 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64585 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.019 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64585 00:23:55.277 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.277 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.277 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64585' 00:23:55.277 killing process with pid 64585 00:23:55.277 07:32:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64585 00:23:55.277 07:32:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64585 00:23:57.806 00:23:57.806 real 0m4.832s 00:23:57.806 user 0m12.383s 00:23:57.806 sys 0m0.830s 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:57.806 ************************************ 00:23:57.806 END TEST locking_overlapped_coremask 00:23:57.806 ************************************ 00:23:57.806 07:32:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:23:57.806 07:32:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:23:57.806 07:32:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:57.806 07:32:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.806 07:32:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:57.806 ************************************ 00:23:57.806 START TEST locking_overlapped_coremask_via_rpc 00:23:57.806 ************************************ 00:23:57.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64673 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64673 /var/tmp/spdk.sock 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64673 ']' 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.806 07:32:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:57.806 [2024-07-15 07:32:36.279809] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:57.806 [2024-07-15 07:32:36.279998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64673 ] 00:23:58.062 [2024-07-15 07:32:36.447901] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
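The check_remaining_locks trace above verifies that only the three per-core lock files belonging to the surviving -m 0x7 target remain. A condensed sketch of that check (file names taken from the trace; quoting differs slightly from the real helper):

  check_remaining_locks() {
      # each claimed core leaves a /var/tmp/spdk_cpu_lock_NNN file while the target runs
      locks=(/var/tmp/spdk_cpu_lock_*)
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for a -m 0x7 target
      [[ ${locks[*]} == "${locks_expected[*]}" ]]          # fail on any missing or unexpected lock file
  }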
00:23:58.062 [2024-07-15 07:32:36.447981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.319 [2024-07-15 07:32:36.750607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.319 [2024-07-15 07:32:36.750737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.319 [2024-07-15 07:32:36.750741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64702 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64702 /var/tmp/spdk2.sock 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64702 ']' 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.247 07:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:59.247 [2024-07-15 07:32:37.778136] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:23:59.247 [2024-07-15 07:32:37.778343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64702 ] 00:23:59.506 [2024-07-15 07:32:37.957967] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:23:59.506 [2024-07-15 07:32:37.958074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.071 [2024-07-15 07:32:38.534784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.071 [2024-07-15 07:32:38.538571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.071 [2024-07-15 07:32:38.538596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.989 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.247 [2024-07-15 07:32:40.606786] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64673 has claimed it. 00:24:02.247 request: 00:24:02.247 { 00:24:02.247 "method": "framework_enable_cpumask_locks", 00:24:02.247 "req_id": 1 00:24:02.247 } 00:24:02.247 Got JSON-RPC error response 00:24:02.247 response: 00:24:02.247 { 00:24:02.247 "code": -32603, 00:24:02.247 "message": "Failed to claim CPU core: 2" 00:24:02.247 } 00:24:02.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
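The framework_enable_cpumask_locks RPC shown above fails on the second target because core 2 is already locked by pid 64673, and the daemon answers with the JSON-RPC error object visible in the trace (code -32603). The same call can be issued by hand against either socket; a sketch assuming the standard scripts/rpc.py client is run from the SPDK repo root:

  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target: claims its cores
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: "Failed to claim CPU core: 2"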
00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64673 /var/tmp/spdk.sock 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64673 ']' 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.247 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64702 /var/tmp/spdk2.sock 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64702 ']' 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:02.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.505 07:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:24:02.762 ************************************ 00:24:02.762 END TEST locking_overlapped_coremask_via_rpc 00:24:02.762 ************************************ 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:24:02.762 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:24:02.763 00:24:02.763 real 0m5.093s 00:24:02.763 user 0m1.787s 00:24:02.763 sys 0m0.270s 00:24:02.763 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.763 07:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:24:02.763 07:32:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:24:02.763 07:32:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64673 ]] 00:24:02.763 07:32:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64673 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64673 ']' 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64673 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64673 00:24:02.763 killing process with pid 64673 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64673' 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64673 00:24:02.763 07:32:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64673 00:24:05.290 07:32:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64702 ]] 00:24:05.290 07:32:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64702 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64702 ']' 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64702 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:24:05.290 07:32:43 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64702 00:24:05.290 killing process with pid 64702 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64702' 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64702 00:24:05.290 07:32:43 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64702 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64673 ]] 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64673 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64673 ']' 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64673 00:24:07.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64673) - No such process 00:24:07.830 Process with pid 64673 is not found 00:24:07.830 Process with pid 64702 is not found 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64673 is not found' 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64702 ]] 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64702 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64702 ']' 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64702 00:24:07.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64702) - No such process 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64702 is not found' 00:24:07.830 07:32:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:24:07.830 00:24:07.830 real 0m54.909s 00:24:07.830 user 1m31.181s 00:24:07.830 sys 0m8.615s 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.830 ************************************ 00:24:07.830 END TEST cpu_locks 00:24:07.830 ************************************ 00:24:07.830 07:32:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:07.830 07:32:46 event -- common/autotest_common.sh@1142 -- # return 0 00:24:07.830 ************************************ 00:24:07.830 END TEST event 00:24:07.830 ************************************ 00:24:07.830 00:24:07.830 real 1m28.332s 00:24:07.830 user 2m33.711s 00:24:07.830 sys 0m13.113s 00:24:07.830 07:32:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.830 07:32:46 event -- common/autotest_common.sh@10 -- # set +x 00:24:08.089 07:32:46 -- common/autotest_common.sh@1142 -- # return 0 00:24:08.089 07:32:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:08.089 07:32:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:08.089 07:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.089 07:32:46 -- common/autotest_common.sh@10 -- # set +x 00:24:08.089 ************************************ 00:24:08.089 START TEST thread 
00:24:08.089 ************************************ 00:24:08.089 07:32:46 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:24:08.089 * Looking for test storage... 00:24:08.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:24:08.089 07:32:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:08.089 07:32:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:24:08.089 07:32:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.089 07:32:46 thread -- common/autotest_common.sh@10 -- # set +x 00:24:08.089 ************************************ 00:24:08.089 START TEST thread_poller_perf 00:24:08.089 ************************************ 00:24:08.089 07:32:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:24:08.089 [2024-07-15 07:32:46.605714] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:08.089 [2024-07-15 07:32:46.605899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64889 ] 00:24:08.348 [2024-07-15 07:32:46.779065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.606 [2024-07-15 07:32:47.096982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.606 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:24:10.048 ====================================== 00:24:10.048 busy:2213774454 (cyc) 00:24:10.048 total_run_count: 306000 00:24:10.048 tsc_hz: 2200000000 (cyc) 00:24:10.048 ====================================== 00:24:10.048 poller_cost: 7234 (cyc), 3288 (nsec) 00:24:10.048 00:24:10.048 real 0m2.037s 00:24:10.048 user 0m1.781s 00:24:10.048 sys 0m0.143s 00:24:10.048 07:32:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.048 07:32:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.048 ************************************ 00:24:10.048 END TEST thread_poller_perf 00:24:10.048 ************************************ 00:24:10.048 07:32:48 thread -- common/autotest_common.sh@1142 -- # return 0 00:24:10.048 07:32:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:10.048 07:32:48 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:24:10.048 07:32:48 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.048 07:32:48 thread -- common/autotest_common.sh@10 -- # set +x 00:24:10.306 ************************************ 00:24:10.306 START TEST thread_poller_perf 00:24:10.306 ************************************ 00:24:10.306 07:32:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:24:10.306 [2024-07-15 07:32:48.712286] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
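The first poller_perf run above ties its summary numbers together with simple arithmetic; spelled out using the reported values:

  poller_cost (cyc)  = busy / total_run_count        = 2213774454 / 306000 ≈ 7234 cyc
  poller_cost (nsec) = poller_cost / (tsc_hz / 1e9)  = 7234 / 2.2          ≈ 3288 nsec

so with 1000 pollers on a 1 µs period, each poller invocation costs roughly 7.2k TSC cycles, about 3.3 µs, on this 2.2 GHz machine.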
00:24:10.306 [2024-07-15 07:32:48.712513] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64931 ] 00:24:10.306 [2024-07-15 07:32:48.889427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.565 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:24:10.565 [2024-07-15 07:32:49.167056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.464 ====================================== 00:24:12.464 busy:2204829968 (cyc) 00:24:12.464 total_run_count: 3845000 00:24:12.464 tsc_hz: 2200000000 (cyc) 00:24:12.464 ====================================== 00:24:12.464 poller_cost: 573 (cyc), 260 (nsec) 00:24:12.464 00:24:12.464 real 0m1.954s 00:24:12.464 user 0m1.706s 00:24:12.464 sys 0m0.138s 00:24:12.464 07:32:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.464 ************************************ 00:24:12.464 END TEST thread_poller_perf 00:24:12.464 ************************************ 00:24:12.464 07:32:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:24:12.464 07:32:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:24:12.464 07:32:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:24:12.464 ************************************ 00:24:12.464 END TEST thread 00:24:12.464 ************************************ 00:24:12.465 00:24:12.465 real 0m4.196s 00:24:12.465 user 0m3.557s 00:24:12.465 sys 0m0.404s 00:24:12.465 07:32:50 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.465 07:32:50 thread -- common/autotest_common.sh@10 -- # set +x 00:24:12.465 07:32:50 -- common/autotest_common.sh@1142 -- # return 0 00:24:12.465 07:32:50 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:24:12.465 07:32:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:12.465 07:32:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.465 07:32:50 -- common/autotest_common.sh@10 -- # set +x 00:24:12.465 ************************************ 00:24:12.465 START TEST accel 00:24:12.465 ************************************ 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:24:12.465 * Looking for test storage... 00:24:12.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:24:12.465 07:32:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:24:12.465 07:32:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:24:12.465 07:32:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:24:12.465 07:32:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65012 00:24:12.465 07:32:50 accel -- accel/accel.sh@63 -- # waitforlisten 65012 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@829 -- # '[' -z 65012 ']' 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:12.465 07:32:50 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.465 07:32:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:24:12.465 07:32:50 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:24:12.465 07:32:50 accel -- common/autotest_common.sh@10 -- # set +x 00:24:12.465 07:32:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:12.465 07:32:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:12.465 07:32:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:12.465 07:32:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:12.465 07:32:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:12.465 07:32:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:24:12.465 07:32:50 accel -- accel/accel.sh@41 -- # jq -r . 00:24:12.465 [2024-07-15 07:32:50.918251] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:12.465 [2024-07-15 07:32:50.918437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65012 ] 00:24:12.723 [2024-07-15 07:32:51.093982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.982 [2024-07-15 07:32:51.372048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.918 07:32:52 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.918 07:32:52 accel -- common/autotest_common.sh@862 -- # return 0 00:24:13.918 07:32:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:24:13.918 07:32:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:24:13.918 07:32:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:24:13.918 07:32:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:24:13.918 07:32:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:24:13.918 07:32:52 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:24:13.918 07:32:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:24:13.918 07:32:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.918 07:32:52 accel -- common/autotest_common.sh@10 -- # set +x 00:24:13.918 07:32:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.918 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.918 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.918 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.918 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.918 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.918 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 
00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # IFS== 00:24:13.919 07:32:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:24:13.919 07:32:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:24:13.919 07:32:52 accel -- accel/accel.sh@75 -- # killprocess 65012 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@948 -- # '[' -z 65012 ']' 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@952 -- # kill -0 65012 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@953 -- # uname 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65012 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:13.919 killing process with pid 65012 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65012' 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@967 -- # kill 65012 00:24:13.919 07:32:52 accel -- common/autotest_common.sh@972 -- # wait 65012 00:24:16.454 07:32:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:24:16.454 07:32:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:24:16.454 07:32:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.454 07:32:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.454 07:32:54 accel -- common/autotest_common.sh@10 -- # set +x 00:24:16.454 07:32:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:16.454 07:32:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:24:16.454 07:32:54 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:24:16.454 07:32:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.454 07:32:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:24:16.454 07:32:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:16.454 07:32:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:24:16.454 07:32:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:16.454 07:32:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.454 07:32:55 accel -- common/autotest_common.sh@10 -- # set +x 00:24:16.454 ************************************ 00:24:16.454 START TEST accel_missing_filename 00:24:16.454 ************************************ 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:16.454 07:32:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:24:16.454 07:32:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:24:16.712 [2024-07-15 07:32:55.110679] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:16.712 [2024-07-15 07:32:55.110875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65099 ] 00:24:16.712 [2024-07-15 07:32:55.295158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.280 [2024-07-15 07:32:55.593245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.280 [2024-07-15 07:32:55.845935] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:17.891 [2024-07-15 07:32:56.418451] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:24:18.457 A filename is required. 
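accel_missing_filename is a negative test: run_test wraps the accel_perf call in NOT, so the test only passes because compress without an input file (-l) just failed. The es= lines that follow take the raw exit status (234 here), fold values above 128 down by 128, and finally assert the result is non-zero. A rough sketch of the inversion idiom with a deliberately simplified NOT; the real helper in autotest_common.sh also performs the exit-status normalization shown in the trace:

NOT() {
    if "$@"; then
        return 1     # wrapped command unexpectedly succeeded -> negative test fails
    fi
    return 0         # wrapped command failed as expected -> negative test passes
}
NOT accel_perf -t 1 -w compress      # compress with no -l input must be rejected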
00:24:18.457 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:24:18.457 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:18.457 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:24:18.457 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:24:18.457 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:24:18.458 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:18.458 00:24:18.458 real 0m1.840s 00:24:18.458 user 0m1.480s 00:24:18.458 sys 0m0.301s 00:24:18.458 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.458 07:32:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:24:18.458 ************************************ 00:24:18.458 END TEST accel_missing_filename 00:24:18.458 ************************************ 00:24:18.458 07:32:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:18.458 07:32:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:18.458 07:32:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:24:18.458 07:32:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.458 07:32:56 accel -- common/autotest_common.sh@10 -- # set +x 00:24:18.458 ************************************ 00:24:18.458 START TEST accel_compress_verify 00:24:18.458 ************************************ 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:18.458 07:32:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:18.458 07:32:56 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:24:18.458 07:32:56 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:24:18.458 [2024-07-15 07:32:57.004143] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:18.458 [2024-07-15 07:32:57.004328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65135 ] 00:24:18.716 [2024-07-15 07:32:57.187882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.975 [2024-07-15 07:32:57.468500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.234 [2024-07-15 07:32:57.707762] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:19.801 [2024-07-15 07:32:58.270533] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:24:20.371 00:24:20.371 Compression does not support the verify option, aborting. 00:24:20.371 ************************************ 00:24:20.371 END TEST accel_compress_verify 00:24:20.371 ************************************ 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.371 00:24:20.371 real 0m1.787s 00:24:20.371 user 0m1.464s 00:24:20.371 sys 0m0.263s 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.371 07:32:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:20.371 07:32:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@10 -- # set +x 00:24:20.371 ************************************ 00:24:20.371 START TEST accel_wrong_workload 00:24:20.371 ************************************ 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:24:20.371 07:32:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:24:20.371 Unsupported workload type: foobar 00:24:20.371 [2024-07-15 07:32:58.827785] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:24:20.371 accel_perf options: 00:24:20.371 [-h help message] 00:24:20.371 [-q queue depth per core] 00:24:20.371 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:20.371 [-T number of threads per core 00:24:20.371 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:20.371 [-t time in seconds] 00:24:20.371 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:20.371 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:24:20.371 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:20.371 [-l for compress/decompress workloads, name of uncompressed input file 00:24:20.371 [-S for crc32c workload, use this seed value (default 0) 00:24:20.371 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:20.371 [-f for fill workload, use this BYTE value (default 255) 00:24:20.371 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:20.371 [-y verify result if this switch is on] 00:24:20.371 [-a tasks to allocate per core (default: same value as -q)] 00:24:20.371 Can be used to spread operations across a wider range of memory. 
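The usage text above enumerates the accepted -w workload types, which is why the foobar run was rejected during argument parsing. For contrast, option combinations like the ones exercised later in this log parse cleanly; shown here purely as examples of the syntax, without the -c config descriptor the harness normally adds:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # crc32c, seed 32, verify results
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y            # plain copy with verification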
00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.371 00:24:20.371 real 0m0.069s 00:24:20.371 user 0m0.076s 00:24:20.371 sys 0m0.041s 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.371 07:32:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:24:20.371 ************************************ 00:24:20.371 END TEST accel_wrong_workload 00:24:20.371 ************************************ 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:20.371 07:32:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.371 07:32:58 accel -- common/autotest_common.sh@10 -- # set +x 00:24:20.371 ************************************ 00:24:20.371 START TEST accel_negative_buffers 00:24:20.371 ************************************ 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:24:20.371 07:32:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:24:20.371 -x option must be non-negative. 
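accel_negative_buffers hands the xor workload a source-buffer count of -1, which fails argument parsing: the '-x option must be non-negative.' message above and the spdk_app_parse_args error that follows are the expected outcome. Per the usage text, xor needs at least two source buffers, so an invocation that would get past the parser looks like this (illustrative only, not taken from this run):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2    # xor over the minimum of two source buffers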
00:24:20.371 [2024-07-15 07:32:58.948168] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:24:20.371 accel_perf options: 00:24:20.371 [-h help message] 00:24:20.371 [-q queue depth per core] 00:24:20.371 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:24:20.371 [-T number of threads per core 00:24:20.371 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:24:20.371 [-t time in seconds] 00:24:20.371 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:24:20.371 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:24:20.371 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:24:20.371 [-l for compress/decompress workloads, name of uncompressed input file 00:24:20.371 [-S for crc32c workload, use this seed value (default 0) 00:24:20.371 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:24:20.371 [-f for fill workload, use this BYTE value (default 255) 00:24:20.371 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:24:20.371 [-y verify result if this switch is on] 00:24:20.371 [-a tasks to allocate per core (default: same value as -q)] 00:24:20.371 Can be used to spread operations across a wider range of memory. 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:20.371 00:24:20.371 real 0m0.079s 00:24:20.371 user 0m0.094s 00:24:20.371 sys 0m0.039s 00:24:20.371 ************************************ 00:24:20.371 END TEST accel_negative_buffers 00:24:20.371 ************************************ 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.371 07:32:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:24:20.628 07:32:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:20.628 07:32:59 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:24:20.628 07:32:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:24:20.628 07:32:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.628 07:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:24:20.628 ************************************ 00:24:20.628 START TEST accel_crc32c 00:24:20.628 ************************************ 00:24:20.628 07:32:59 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:24:20.628 07:32:59 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:24:20.628 [2024-07-15 07:32:59.079092] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:20.628 [2024-07-15 07:32:59.079253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65213 ] 00:24:20.885 [2024-07-15 07:32:59.253726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.141 [2024-07-15 07:32:59.623491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.398 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:21.399 07:32:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:24:23.297 07:33:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:23.297 00:24:23.297 real 0m2.839s 00:24:23.297 user 0m2.491s 00:24:23.297 sys 0m0.247s 00:24:23.297 07:33:01 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.297 07:33:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:24:23.297 ************************************ 00:24:23.297 END TEST accel_crc32c 00:24:23.297 ************************************ 00:24:23.555 07:33:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:23.555 07:33:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:24:23.555 07:33:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:24:23.555 07:33:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.555 07:33:01 accel -- common/autotest_common.sh@10 -- # set +x 00:24:23.555 ************************************ 00:24:23.555 START TEST accel_crc32c_C2 00:24:23.555 ************************************ 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:24:23.555 07:33:01 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:24:23.555 07:33:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:24:23.555 [2024-07-15 07:33:01.971353] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:23.555 [2024-07-15 07:33:01.971532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65264 ] 00:24:23.555 [2024-07-15 07:33:02.138803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.121 [2024-07-15 07:33:02.429238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:24.121 07:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:24:26.648 07:33:04 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:26.648 00:24:26.649 real 0m2.800s 00:24:26.649 user 0m2.437s 00:24:26.649 sys 0m0.262s 00:24:26.649 07:33:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.649 07:33:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 ************************************ 00:24:26.649 END TEST accel_crc32c_C2 00:24:26.649 ************************************ 00:24:26.649 07:33:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:26.649 07:33:04 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:24:26.649 07:33:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:26.649 07:33:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.649 07:33:04 accel -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 ************************************ 00:24:26.649 START TEST accel_copy 00:24:26.649 ************************************ 00:24:26.649 07:33:04 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.649 07:33:04 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:24:26.649 07:33:04 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:24:26.649 [2024-07-15 07:33:04.821257] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:26.649 [2024-07-15 07:33:04.821422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65312 ] 00:24:26.649 [2024-07-15 07:33:04.991419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.907 [2024-07-15 07:33:05.275660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 
07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:26.907 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:27.165 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:27.166 07:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:24:29.066 07:33:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:29.066 00:24:29.066 real 0m2.751s 00:24:29.066 user 0m2.415s 00:24:29.066 sys 0m0.234s 00:24:29.066 07:33:07 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.066 07:33:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:24:29.066 ************************************ 00:24:29.066 END TEST accel_copy 00:24:29.066 ************************************ 00:24:29.066 07:33:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:29.066 07:33:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:29.066 07:33:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:24:29.066 07:33:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.066 07:33:07 accel -- common/autotest_common.sh@10 -- # set +x 00:24:29.066 ************************************ 00:24:29.066 START TEST accel_fill 00:24:29.066 ************************************ 00:24:29.066 07:33:07 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:29.066 07:33:07 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:24:29.066 07:33:07 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:24:29.066 [2024-07-15 07:33:07.629480] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:29.066 [2024-07-15 07:33:07.629670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65364 ] 00:24:29.325 [2024-07-15 07:33:07.810172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.583 [2024-07-15 07:33:08.082694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.841 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.842 07:33:08 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:29.842 07:33:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
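
The IFS=: / read -r var val / case "$var" entries traced here are the harness reading accel_perf's output line by line and capturing the opcode and module it reports (the accel_opc=fill and accel_module=software assignments traced nearby). A minimal sketch of that loop, with the case patterns as illustrative assumptions rather than the script's exact source:

    # accel_perf below stands for the wrapper traced at accel/accel.sh@12-@15
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        val=${val# }                                 # lines look like "Some Key: value"
        case "$var" in
            *'Workload Type'*) accel_opc=$val ;;     # e.g. fill, copy_crc32c, xor
            *'Module'*)        accel_module=$val ;;  # e.g. software
        esac
    done < <(accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y)
    # mirrors the accel/accel.sh@27 checks that close each test below
    [[ -n $accel_opc && -n $accel_module && $accel_module == "software" ]]
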
00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:24:31.741 07:33:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:31.742 07:33:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:24:31.742 07:33:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:31.742 00:24:31.742 real 0m2.733s 00:24:31.742 user 0m2.388s 00:24:31.742 sys 0m0.246s 00:24:31.742 07:33:10 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.742 07:33:10 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:24:31.742 ************************************ 00:24:31.742 END TEST accel_fill 00:24:31.742 ************************************ 00:24:31.742 07:33:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:31.742 07:33:10 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:24:31.742 07:33:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:31.742 07:33:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.742 07:33:10 accel -- common/autotest_common.sh@10 -- # set +x 00:24:31.742 ************************************ 00:24:31.742 START TEST accel_copy_crc32c 00:24:31.742 ************************************ 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:31.742 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:24:31.999 07:33:10 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:24:31.999 [2024-07-15 07:33:10.427583] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:31.999 [2024-07-15 07:33:10.428042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65411 ] 00:24:31.999 [2024-07-15 07:33:10.606847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.565 [2024-07-15 07:33:10.872525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.565 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:32.566 07:33:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
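
Earlier in this test the trace shows build_accel_config running (accel/accel.sh@31-@41: accel_json_cfg=(), the 0 -gt 0 guards, local IFS=',', jq -r .); that is the step which produces the JSON handed to accel_perf through -c. A rough sketch of such an assembly step, with the JSON skeleton and the example fragment as assumptions rather than the script's verbatim output:

    build_accel_config() {
        accel_json_cfg=()
        # module-specific fragments would be appended here when requested, e.g.
        # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}'); in this run the
        # array stays empty, so accel_perf falls back to the software module.
        local IFS=,
        jq -r . <<< '{"subsystems":[{"subsystem":"accel","config":['"${accel_json_cfg[*]}"']}]}'
    }
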
00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:35.093 00:24:35.093 real 0m2.748s 00:24:35.093 user 0m2.397s 00:24:35.093 sys 0m0.249s 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.093 07:33:13 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:24:35.093 ************************************ 00:24:35.093 END TEST accel_copy_crc32c 00:24:35.093 ************************************ 00:24:35.093 07:33:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:35.094 07:33:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:24:35.094 07:33:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:24:35.094 07:33:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.094 07:33:13 accel -- common/autotest_common.sh@10 -- # set +x 00:24:35.094 ************************************ 00:24:35.094 START TEST accel_copy_crc32c_C2 00:24:35.094 ************************************ 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:24:35.094 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:24:35.094 [2024-07-15 07:33:13.197545] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:35.094 [2024-07-15 07:33:13.197700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65457 ] 00:24:35.094 [2024-07-15 07:33:13.364760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.094 [2024-07-15 07:33:13.671083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:35.352 07:33:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:37.878 00:24:37.878 real 0m2.754s 00:24:37.878 user 0m2.410s 00:24:37.878 sys 0m0.246s 00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
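
The real/user/sys block just above and the START TEST / END TEST banners around every case come from the run_test wrapper in autotest_common.sh that the trace keeps entering. A plausible sketch of what such a wrapper does, not the verbatim SPDK source; the banner text and the time call match what shows up in this log:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # e.g. accel_test -t 1 -w copy_crc32c -y -C 2
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
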
00:24:37.878 07:33:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:24:37.878 ************************************ 00:24:37.878 END TEST accel_copy_crc32c_C2 00:24:37.878 ************************************ 00:24:37.878 07:33:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:37.878 07:33:15 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:24:37.878 07:33:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:37.878 07:33:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.878 07:33:15 accel -- common/autotest_common.sh@10 -- # set +x 00:24:37.878 ************************************ 00:24:37.878 START TEST accel_dualcast 00:24:37.878 ************************************ 00:24:37.878 07:33:15 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:24:37.878 07:33:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:24:37.879 07:33:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:24:37.879 [2024-07-15 07:33:15.994329] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
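
The invocation traced at accel/accel.sh@12 for this test runs the binary with -c /dev/fd/62; an fd in that range is what a process substitution feeding the jq-built config would appear as. An equivalent standalone command (binary path and workload flags copied from the log; supplying the config via process substitution is an assumption about how that fd is wired up):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c <(build_accel_config) \
        -t 1 -w dualcast -y
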
00:24:37.879 [2024-07-15 07:33:15.994518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65509 ] 00:24:37.879 [2024-07-15 07:33:16.164153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.879 [2024-07-15 07:33:16.438145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:24:38.136 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:38.137 07:33:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:24:40.667 07:33:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:40.667 00:24:40.667 real 0m2.724s 00:24:40.667 user 0m2.401s 00:24:40.667 sys 0m0.225s 00:24:40.667 07:33:18 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.667 07:33:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:24:40.667 ************************************ 00:24:40.667 END TEST accel_dualcast 00:24:40.667 ************************************ 00:24:40.667 07:33:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:40.667 07:33:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:24:40.667 07:33:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:40.667 07:33:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.667 07:33:18 accel -- common/autotest_common.sh@10 -- # set +x 00:24:40.667 ************************************ 00:24:40.667 START TEST accel_compare 00:24:40.667 ************************************ 00:24:40.667 07:33:18 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:24:40.667 07:33:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:24:40.667 [2024-07-15 07:33:18.778834] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
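
The xtrace_disable calls traced around each END TEST banner (common/autotest_common.sh@1105 and @1124, each followed by a set +x entry) simply mute command tracing while the wrapper does its bookkeeping, then restore it for the next test. A minimal sketch of that helper pair, with the state-saving detail an assumption:

    xtrace_disable() {
        PREV_XTRACE=$(set +o | grep xtrace)   # remember whether -x was enabled
        set +x                                # the "set +x" entries in the log
    }
    xtrace_restore() {
        eval "$PREV_XTRACE"                   # turn tracing back on if it was on
    }
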
00:24:40.667 [2024-07-15 07:33:18.779021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65556 ] 00:24:40.667 [2024-07-15 07:33:18.960482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.667 [2024-07-15 07:33:19.232319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:40.923 07:33:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 
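
The check rendered just below as [[ software == \s\o\f\t\w\a\r\e ]] is not garbled output: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace prints it with every character escaped to show it is matched literally rather than as a glob pattern. With accel_module having been set to software earlier in the trace (accel/accel.sh@22), the underlying source line is simply:

    # what the traced "[[ software == \s\o\f\t\w\a\r\e ]]" corresponds to
    [[ $accel_module == "software" ]]
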
00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:24:43.443 07:33:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:43.443 00:24:43.443 real 0m2.763s 00:24:43.443 user 0m2.430s 00:24:43.443 sys 0m0.235s 00:24:43.443 07:33:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.443 ************************************ 00:24:43.443 END TEST accel_compare 00:24:43.443 ************************************ 00:24:43.443 07:33:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:24:43.443 07:33:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:43.443 07:33:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:24:43.443 07:33:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:43.443 07:33:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.443 07:33:21 accel -- common/autotest_common.sh@10 -- # set +x 00:24:43.443 ************************************ 00:24:43.443 START TEST accel_xor 00:24:43.443 ************************************ 00:24:43.443 07:33:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:24:43.443 07:33:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:24:43.443 [2024-07-15 07:33:21.588670] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:24:43.443 [2024-07-15 07:33:21.588889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65607 ] 00:24:43.443 [2024-07-15 07:33:21.768201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.443 [2024-07-15 07:33:22.043278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
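The repeated IFS=: / read -r var val / case "$var" records above come from the harness parsing accel_perf's "key: value" summary one line at a time. A minimal sketch of that loop shape, reconstructed from the trace rather than copied from accel.sh (which keys the real script matches on, and what it stores, are assumptions here):

  # only the IFS=: / read / case structure is taken from the trace;
  # the patterns and assignments below are illustrative placeholders
  while IFS=: read -r var val; do
      case "$var" in
          *module*)    accel_module=$val ;;
          *operation*) accel_opc=$val ;;
      esac
  done < accel_perf_output.txt
  echo "module=${accel_module:-unknown} opc=${accel_opc:-unknown}"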
00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:43.701 07:33:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.222 07:33:24 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.222 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:46.223 00:24:46.223 real 0m2.741s 00:24:46.223 user 0m2.396s 00:24:46.223 sys 0m0.247s 00:24:46.223 07:33:24 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:46.223 07:33:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:24:46.223 ************************************ 00:24:46.223 END TEST accel_xor 00:24:46.223 ************************************ 00:24:46.223 07:33:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:46.223 07:33:24 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:24:46.223 07:33:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:24:46.223 07:33:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.223 07:33:24 accel -- common/autotest_common.sh@10 -- # set +x 00:24:46.223 ************************************ 00:24:46.223 START TEST accel_xor 00:24:46.223 ************************************ 00:24:46.223 07:33:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:24:46.223 07:33:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:24:46.223 [2024-07-15 07:33:24.389048] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:24:46.223 [2024-07-15 07:33:24.389213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65653 ] 00:24:46.223 [2024-07-15 07:33:24.556852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.479 [2024-07-15 07:33:24.838533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
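The second accel_xor run differs from the first only in the -x 3 argument; in the value trace above this shows up as val=3 where the two-buffer run had val=2, which suggests the flag controls the number of xor source buffers (an inference from the trace, not a documented claim). The equivalent standalone command, again minus the harness's -c /dev/fd/62 config descriptor:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3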
00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:46.479 07:33:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:24:49.003 07:33:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:49.003 00:24:49.003 real 0m2.747s 00:24:49.003 user 0m2.405s 00:24:49.003 sys 0m0.240s 00:24:49.003 07:33:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.003 07:33:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:24:49.003 ************************************ 00:24:49.003 END TEST accel_xor 00:24:49.003 ************************************ 00:24:49.003 07:33:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:49.003 07:33:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:24:49.003 07:33:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:24:49.003 07:33:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.003 07:33:27 accel -- common/autotest_common.sh@10 -- # set +x 00:24:49.003 ************************************ 00:24:49.003 START TEST accel_dif_verify 00:24:49.003 ************************************ 00:24:49.003 07:33:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:24:49.003 07:33:27 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:24:49.003 [2024-07-15 07:33:27.161978] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
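Each run_test call in the log wraps one accel_test invocation with START/END banners and the real/user/sys timing lines. A rough sketch of that wrapper shape, assuming nothing beyond what the banners show (the real run_test in autotest_common.sh also handles xtrace toggling and return-code bookkeeping):

  run_test() {
      # first argument is the test name, the rest is the command to run
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }
  # usage mirroring the log:
  # run_test accel_dif_verify accel_test -t 1 -w dif_verify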
00:24:49.003 [2024-07-15 07:33:27.162155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65701 ] 00:24:49.003 [2024-07-15 07:33:27.327123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.003 [2024-07-15 07:33:27.599218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:49.261 07:33:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.789 07:33:29 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.789 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:24:51.790 07:33:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:51.790 00:24:51.790 real 0m2.770s 00:24:51.790 user 0m2.437s 00:24:51.790 sys 0m0.237s 00:24:51.790 07:33:29 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.790 07:33:29 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:24:51.790 ************************************ 00:24:51.790 END TEST accel_dif_verify 00:24:51.790 ************************************ 00:24:51.790 07:33:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:51.790 07:33:29 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:24:51.790 07:33:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:24:51.790 07:33:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.790 07:33:29 accel -- common/autotest_common.sh@10 -- # set +x 00:24:51.790 ************************************ 00:24:51.790 START TEST accel_dif_generate 00:24:51.790 ************************************ 00:24:51.790 07:33:29 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:51.790 07:33:29 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:24:51.790 07:33:29 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:24:51.790 [2024-07-15 07:33:29.984980] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:51.790 [2024-07-15 07:33:29.985145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65753 ] 00:24:51.790 [2024-07-15 07:33:30.163237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.047 [2024-07-15 07:33:30.487240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:24:52.306 07:33:30 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.306 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:52.307 07:33:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:24:54.208 07:33:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.208 00:24:54.208 real 0m2.814s 
00:24:54.208 user 0m2.456s 00:24:54.208 sys 0m0.261s 00:24:54.208 07:33:32 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.208 07:33:32 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:24:54.208 ************************************ 00:24:54.208 END TEST accel_dif_generate 00:24:54.208 ************************************ 00:24:54.208 07:33:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:54.208 07:33:32 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:24:54.208 07:33:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:24:54.208 07:33:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.208 07:33:32 accel -- common/autotest_common.sh@10 -- # set +x 00:24:54.208 ************************************ 00:24:54.208 START TEST accel_dif_generate_copy 00:24:54.208 ************************************ 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:24:54.208 07:33:32 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:24:54.466 [2024-07-15 07:33:32.849307] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
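Every test section repeats the same build_accel_config xtrace: accel_json_cfg=(), three [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=',' and jq -r .; in these runs all of those checks are false, so no JSON config is produced and accel_perf ends up on the software module, as the later [[ -n software ]] checks confirm. A speculative sketch of that shape (everything except the accel_json_cfg name and the IFS/jq usage is an assumption):

  build_accel_config() {
      accel_json_cfg=()
      # assumption: module-specific JSON fragments would be appended here
      # when a hardware accel module is enabled in the test environment
      if [[ ${#accel_json_cfg[@]} -gt 0 ]]; then
          local IFS=,
          echo "[${accel_json_cfg[*]}]" | jq -r .
      fi
  }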
00:24:54.466 [2024-07-15 07:33:32.849465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65800 ] 00:24:54.466 [2024-07-15 07:33:33.021681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.746 [2024-07-15 07:33:33.299257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.003 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:55.004 07:33:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:57.532 00:24:57.532 real 0m2.780s 00:24:57.532 user 0m2.432s 00:24:57.532 sys 0m0.248s 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.532 07:33:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:24:57.532 ************************************ 00:24:57.532 END TEST accel_dif_generate_copy 00:24:57.532 ************************************ 00:24:57.532 07:33:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:57.532 07:33:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:24:57.532 07:33:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:57.532 07:33:35 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:24:57.532 07:33:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.532 07:33:35 accel -- common/autotest_common.sh@10 -- # set +x 00:24:57.532 ************************************ 00:24:57.532 START TEST accel_comp 00:24:57.532 ************************************ 00:24:57.532 07:33:35 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:24:57.532 07:33:35 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:24:57.532 07:33:35 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:24:57.532 [2024-07-15 07:33:35.692019] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:57.532 [2024-07-15 07:33:35.692214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65846 ] 00:24:57.532 [2024-07-15 07:33:35.871587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.791 [2024-07-15 07:33:36.151305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:58.048 07:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:24:59.948 07:33:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:59.948 00:24:59.948 real 0m2.788s 00:24:59.948 user 0m2.415s 00:24:59.948 sys 0m0.277s 00:24:59.948 ************************************ 00:24:59.948 END TEST accel_comp 00:24:59.948 ************************************ 00:24:59.948 07:33:38 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:59.948 07:33:38 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:24:59.948 07:33:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:24:59.948 07:33:38 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:59.948 07:33:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:24:59.948 07:33:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.948 07:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:24:59.948 ************************************ 00:24:59.948 START TEST accel_decomp 00:24:59.948 ************************************ 00:24:59.948 07:33:38 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:24:59.948 07:33:38 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:24:59.948 [2024-07-15 07:33:38.526337] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:24:59.948 [2024-07-15 07:33:38.526570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65898 ] 00:25:00.206 [2024-07-15 07:33:38.706121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.464 [2024-07-15 07:33:38.985627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
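The long runs of 'IFS=:', 'read -r var val', and 'case "$var" in' entries above are accel.sh stepping through the expected key:value pairs for the run (opcode, module, buffer size, and so on). A minimal bash sketch of that pattern follows; the key names and the inline input are illustrative assumptions, not the real script, which lives in the SPDK repo and checks many more fields.

# Minimal sketch of the key:value parsing pattern visible in the xtrace above.
# The real accel.sh drives accel_perf and compares many more fields; the key
# names and the here-doc input below are illustrative assumptions only.
while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;      # e.g. compress or decompress
        module) accel_module=$val ;;   # e.g. software
        *)      : ;;                   # anything else is ignored in this sketch
    esac
done <<'EOF'
opc:decompress
module:software
EOF
echo "opcode=$accel_opc module=$accel_module"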
00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:00.722 07:33:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:02.680 07:33:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:02.680 ************************************ 00:25:02.680 END TEST accel_decomp 00:25:02.680 ************************************ 00:25:02.680 00:25:02.680 real 0m2.756s 00:25:02.680 user 0m2.407s 00:25:02.680 sys 0m0.251s 00:25:02.680 07:33:41 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.680 07:33:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:25:02.680 07:33:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:02.680 07:33:41 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:25:02.680 07:33:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:25:02.680 07:33:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.680 07:33:41 accel -- common/autotest_common.sh@10 -- # set +x 00:25:02.680 ************************************ 00:25:02.680 START TEST accel_decomp_full 00:25:02.680 ************************************ 00:25:02.680 07:33:41 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:25:02.680 07:33:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:25:02.938 [2024-07-15 07:33:41.332312] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
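The run_test accel_decomp_full line above launches accel_perf with -o 0, and the trace later reports a '111250 bytes' buffer instead of the '4096 bytes' seen in the plain decompress run, so -o 0 appears to select the whole input file as the transfer size. A hand-run equivalent of that invocation, assuming the SPDK tree is built at the path shown in the log and substituting an ordinary accel.json file for the /dev/fd/62 config the harness passes in, would be:

# Re-running the accel_decomp_full invocation from the trace by hand.
# Assumptions: the SPDK repo is built at this path and accel.json carries the
# accel config that the harness normally supplies through /dev/fd/62.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
# -t 1: run for one second; -w decompress: workload; -l: compressed input file
# used by the accel tests; -y: presumably result verification; -o 0: full-size
# transfers, judging by the '111250 bytes' value in the trace.
"$SPDK_REPO/build/examples/accel_perf" -c accel.json -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -o 0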
00:25:02.938 [2024-07-15 07:33:41.332517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65945 ] 00:25:02.938 [2024-07-15 07:33:41.502561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.196 [2024-07-15 07:33:41.775628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:03.454 07:33:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:05.983 07:33:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:05.983 00:25:05.983 real 0m2.756s 00:25:05.983 user 0m2.424s 00:25:05.983 sys 0m0.235s 00:25:05.983 07:33:44 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.983 07:33:44 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:25:05.983 ************************************ 00:25:05.983 END TEST accel_decomp_full 00:25:05.983 ************************************ 00:25:05.983 07:33:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:05.983 07:33:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:25:05.983 07:33:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:25:05.983 07:33:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.983 07:33:44 accel -- common/autotest_common.sh@10 -- # set +x 00:25:05.983 ************************************ 00:25:05.983 START TEST accel_decomp_mcore 00:25:05.983 ************************************ 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:25:05.983 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:25:05.984 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:25:05.984 [2024-07-15 07:33:44.136876] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:05.984 [2024-07-15 07:33:44.137049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65997 ] 00:25:05.984 [2024-07-15 07:33:44.304264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.984 [2024-07-15 07:33:44.579433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.984 [2024-07-15 07:33:44.579589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.984 [2024-07-15 07:33:44.579747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.984 [2024-07-15 07:33:44.579992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.242 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:06.243 07:33:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:08.774 ************************************ 00:25:08.774 END TEST accel_decomp_mcore 00:25:08.774 ************************************ 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:08.774 00:25:08.774 real 0m2.766s 00:25:08.774 user 0m7.784s 00:25:08.774 sys 0m0.269s 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.774 07:33:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:25:08.774 07:33:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:08.774 07:33:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:25:08.774 07:33:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:25:08.774 07:33:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.774 07:33:46 accel -- common/autotest_common.sh@10 -- # set +x 00:25:08.774 ************************************ 00:25:08.774 START TEST accel_decomp_full_mcore 00:25:08.774 ************************************ 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:08.774 07:33:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:25:08.774 07:33:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:25:08.774 [2024-07-15 07:33:46.961545] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:08.774 [2024-07-15 07:33:46.961763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66046 ] 00:25:08.774 [2024-07-15 07:33:47.141519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.042 [2024-07-15 07:33:47.417114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.042 [2024-07-15 07:33:47.417261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.042 [2024-07-15 07:33:47.417376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:09.042 [2024-07-15 07:33:47.417639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:09.299 07:33:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:25:09.299 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.300 07:33:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:09.300 07:33:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.200 07:33:49 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:25:11.200 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:25:11.201 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:11.201 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:11.201 07:33:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.201 00:25:11.201 real 0m2.846s 00:25:11.201 user 0m7.996s 00:25:11.201 sys 0m0.309s 00:25:11.201 07:33:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.201 ************************************ 00:25:11.201 END TEST accel_decomp_full_mcore 00:25:11.201 ************************************ 00:25:11.201 07:33:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:25:11.201 07:33:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:11.201 07:33:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:11.201 07:33:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:25:11.201 07:33:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.201 07:33:49 accel -- common/autotest_common.sh@10 -- # set +x 00:25:11.201 ************************************ 00:25:11.201 START TEST accel_decomp_mthread 00:25:11.201 ************************************ 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:25:11.201 07:33:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:25:11.458 [2024-07-15 07:33:49.846814] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
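The two mcore groups above and the accel_decomp_mthread run that starts here differ from the earlier single-core runs only in their parallelism flags: -m 0xf is the core mask (the mcore traces show reactors starting on cores 0 through 3), while -T 2 looks like a request for two worker threads (the mthread trace below records val=2); that reading of -T is an assumption, not something the log states. Side by side, the two invocations, again with accel.json standing in for the /dev/fd/62 config, are:

# Parallelism variants mirroring the run_test lines in this log.
# Flag meanings are inferred, not confirmed by the log itself:
# -m 0xf: core mask; the mcore runs start reactors on cores 0-3.
# -T 2:   appears to ask for 2 worker threads (val=2 in the mthread trace).
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/examples/accel_perf" -c accel.json -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -m 0xf
"$SPDK_REPO/build/examples/accel_perf" -c accel.json -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -T 2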
00:25:11.458 [2024-07-15 07:33:49.846991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66096 ] 00:25:11.458 [2024-07-15 07:33:50.019069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.716 [2024-07-15 07:33:50.293838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:25:11.974 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:11.975 07:33:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:14.504 00:25:14.504 real 0m2.732s 00:25:14.504 user 0m2.404s 00:25:14.504 sys 0m0.230s 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.504 07:33:52 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:25:14.504 ************************************ 00:25:14.504 END TEST accel_decomp_mthread 00:25:14.504 ************************************ 00:25:14.504 07:33:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:14.504 07:33:52 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:14.504 07:33:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:25:14.504 07:33:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.504 07:33:52 accel -- common/autotest_common.sh@10 -- # set +x 00:25:14.504 ************************************ 00:25:14.504 START 
TEST accel_decomp_full_mthread 00:25:14.504 ************************************ 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:25:14.504 07:33:52 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:25:14.504 [2024-07-15 07:33:52.632325] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:25:14.504 [2024-07-15 07:33:52.632509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66148 ] 00:25:14.504 [2024-07-15 07:33:52.802035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.504 [2024-07-15 07:33:53.075143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.761 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:14.762 07:33:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:14.762 07:33:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:17.303 00:25:17.303 real 0m2.774s 00:25:17.303 user 0m2.454s 00:25:17.303 sys 0m0.223s 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.303 07:33:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:25:17.303 ************************************ 00:25:17.303 END TEST accel_decomp_full_mthread 00:25:17.303 ************************************ 
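Note on the repeated trace bursts above: each group of "-- # val=", "case \"$var\" in", "IFS=:" and "read -r var val" lines is the xtrace of the settings-parsing loop in test/accel/accel.sh (script lines 19-23 in the markers), run once per line of accel_perf output. The sketch below is a hedged reconstruction of that shell pattern, based only on the traced line markers rather than the verbatim SPDK source; the case patterns and sample field names are illustrative assumptions.

    #!/usr/bin/env bash
    # Hedged reconstruction of the accel.sh settings loop (not the verbatim SPDK source).
    # accel_perf reports its configuration as "name: value" lines; the test reads them
    # back one pair per iteration, which is why the xtrace repeats the same statements
    # (accel.sh@19-@21) for every line, then records the module and opcode it asserts on.
    parse_settings() {
        local var val accel_module='' accel_opc=''
        while IFS=: read -r var val; do                            # accel.sh@19
            case "$var" in                                         # accel.sh@21
                *[Mm]odule*) accel_module=${val//[[:space:]]/} ;;  # accel.sh@22 (software)
                *[Ww]orkload*) accel_opc=${val//[[:space:]]/} ;;   # accel.sh@23 (decompress)
            esac
        done
        # accel.sh@27 asserts on what was collected:
        [[ -n $accel_module && -n $accel_opc && $accel_module == software ]] &&
            echo "ok: $accel_opc via $accel_module"
    }
    # Example input shaped like the values visible in the trace (field names are assumptions):
    printf '%s\n' 'module: software' 'workload: decompress' \
        'transfer size: 4096 bytes' 'run time: 1 seconds' | parse_settings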
00:25:17.303 07:33:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:17.303 07:33:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:25:17.303 07:33:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:25:17.303 07:33:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:25:17.303 07:33:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:25:17.303 07:33:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:25:17.303 07:33:55 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:17.303 07:33:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.303 07:33:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:25:17.303 07:33:55 accel -- common/autotest_common.sh@10 -- # set +x 00:25:17.303 07:33:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:25:17.303 07:33:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:25:17.303 07:33:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:25:17.303 07:33:55 accel -- accel/accel.sh@41 -- # jq -r . 00:25:17.303 ************************************ 00:25:17.303 START TEST accel_dif_functional_tests 00:25:17.303 ************************************ 00:25:17.303 07:33:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:25:17.303 [2024-07-15 07:33:55.498617] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:17.303 [2024-07-15 07:33:55.499012] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66196 ] 00:25:17.303 [2024-07-15 07:33:55.666134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:17.561 [2024-07-15 07:33:55.939885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.561 [2024-07-15 07:33:55.940013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.561 [2024-07-15 07:33:55.940015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.818 00:25:17.818 00:25:17.818 CUnit - A unit testing framework for C - Version 2.1-3 00:25:17.818 http://cunit.sourceforge.net/ 00:25:17.818 00:25:17.818 00:25:17.818 Suite: accel_dif 00:25:17.818 Test: verify: DIF generated, GUARD check ...passed 00:25:17.818 Test: verify: DIF generated, APPTAG check ...passed 00:25:17.818 Test: verify: DIF generated, REFTAG check ...passed 00:25:17.818 Test: verify: DIF not generated, GUARD check ...[2024-07-15 07:33:56.316642] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:25:17.818 passed 00:25:17.818 Test: verify: DIF not generated, APPTAG check ...passed 00:25:17.818 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 07:33:56.316794] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:25:17.818 passed 00:25:17.818 Test: verify: APPTAG correct, APPTAG check ...[2024-07-15 07:33:56.316945] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:25:17.818 passed 00:25:17.818 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:25:17.818 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 07:33:56.317238] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:25:17.818 passed 00:25:17.818 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:25:17.818 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:25:17.818 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 07:33:56.317710] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:25:17.818 passed 00:25:17.818 Test: verify copy: DIF generated, GUARD check ...passed 00:25:17.818 Test: verify copy: DIF generated, APPTAG check ...passed 00:25:17.818 Test: verify copy: DIF generated, REFTAG check ...passed 00:25:17.819 Test: verify copy: DIF not generated, GUARD check ...passed 00:25:17.819 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 07:33:56.318419] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:25:17.819 passed 00:25:17.819 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 07:33:56.318600] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:25:17.819 passed 00:25:17.819 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 07:33:56.318733] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:25:17.819 passed 00:25:17.819 Test: generate copy: DIF generated, APTTAG check ...passed 00:25:17.819 Test: generate copy: DIF generated, REFTAG check ...passed 00:25:17.819 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:25:17.819 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:25:17.819 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:25:17.819 Test: generate copy: iovecs-len validate ...passed 00:25:17.819 Test: generate copy: buffer alignment validate ...[2024-07-15 07:33:56.319556] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:25:17.819 passed 00:25:17.819 00:25:17.819 Run Summary: Type Total Ran Passed Failed Inactive 00:25:17.819 suites 1 1 n/a 0 0 00:25:17.819 tests 26 26 26 0 0 00:25:17.819 asserts 115 115 115 0 n/a 00:25:17.819 00:25:17.819 Elapsed time = 0.009 seconds 00:25:19.192 ************************************ 00:25:19.192 END TEST accel_dif_functional_tests 00:25:19.192 ************************************ 00:25:19.192 00:25:19.192 real 0m2.199s 00:25:19.192 user 0m4.164s 00:25:19.192 sys 0m0.330s 00:25:19.192 07:33:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.192 07:33:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:25:19.192 07:33:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:25:19.192 ************************************ 00:25:19.192 END TEST accel 00:25:19.192 ************************************ 00:25:19.192 00:25:19.192 real 1m6.939s 00:25:19.192 user 1m11.209s 00:25:19.192 sys 0m7.406s 00:25:19.192 07:33:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.192 07:33:57 accel -- common/autotest_common.sh@10 -- # set +x 00:25:19.192 07:33:57 -- common/autotest_common.sh@1142 -- # return 0 00:25:19.192 07:33:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:25:19.192 07:33:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:19.192 07:33:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.192 07:33:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.192 ************************************ 00:25:19.192 START TEST accel_rpc 00:25:19.192 ************************************ 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:25:19.192 * Looking for test storage... 00:25:19.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:25:19.192 07:33:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:25:19.192 07:33:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66283 00:25:19.192 07:33:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:19.192 07:33:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66283 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66283 ']' 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.192 07:33:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:19.450 [2024-07-15 07:33:57.909541] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:25:19.450 [2024-07-15 07:33:57.909776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66283 ] 00:25:19.708 [2024-07-15 07:33:58.091538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.966 [2024-07-15 07:33:58.386425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.531 07:33:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.531 07:33:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:25:20.531 07:33:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:25:20.531 07:33:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:25:20.531 07:33:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:25:20.531 07:33:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:25:20.531 07:33:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:25:20.531 07:33:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:20.531 07:33:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.531 07:33:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:20.531 ************************************ 00:25:20.531 START TEST accel_assign_opcode 00:25:20.531 ************************************ 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:25:20.531 [2024-07-15 07:33:58.855456] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:25:20.531 [2024-07-15 07:33:58.863448] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.531 07:33:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.465 software 00:25:21.465 ************************************ 00:25:21.465 END TEST accel_assign_opcode 00:25:21.465 ************************************ 00:25:21.465 00:25:21.465 real 0m0.966s 00:25:21.465 user 0m0.052s 00:25:21.465 sys 0m0.010s 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.465 07:33:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:25:21.465 07:33:59 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66283 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66283 ']' 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66283 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66283 00:25:21.465 killing process with pid 66283 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66283' 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@967 -- # kill 66283 00:25:21.465 07:33:59 accel_rpc -- common/autotest_common.sh@972 -- # wait 66283 00:25:23.994 00:25:23.994 real 0m4.677s 00:25:23.994 user 0m4.497s 00:25:23.994 sys 0m0.724s 00:25:23.994 ************************************ 00:25:23.994 END TEST accel_rpc 00:25:23.994 ************************************ 00:25:23.994 07:34:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.994 07:34:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:23.994 07:34:02 -- common/autotest_common.sh@1142 -- # return 0 00:25:23.994 07:34:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:23.994 07:34:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:23.994 07:34:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.994 07:34:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.994 ************************************ 00:25:23.994 START TEST app_cmdline 00:25:23.994 ************************************ 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:23.994 * Looking for test storage... 00:25:23.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:23.994 07:34:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:25:23.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:23.994 07:34:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66405 00:25:23.994 07:34:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:25:23.994 07:34:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66405 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66405 ']' 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.994 07:34:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:24.289 [2024-07-15 07:34:02.633840] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:24.289 [2024-07-15 07:34:02.634297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66405 ] 00:25:24.289 [2024-07-15 07:34:02.815407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.589 [2024-07-15 07:34:03.086780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.519 07:34:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.519 07:34:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:25:25.519 07:34:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:25:25.777 { 00:25:25.777 "version": "SPDK v24.09-pre git sha1 9c8eb396d", 00:25:25.777 "fields": { 00:25:25.777 "major": 24, 00:25:25.777 "minor": 9, 00:25:25.777 "patch": 0, 00:25:25.777 "suffix": "-pre", 00:25:25.777 "commit": "9c8eb396d" 00:25:25.777 } 00:25:25.777 } 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:25:25.777 07:34:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:25.777 07:34:04 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:26.034 request: 00:25:26.034 { 00:25:26.034 "method": "env_dpdk_get_mem_stats", 00:25:26.034 "req_id": 1 00:25:26.034 } 00:25:26.034 Got JSON-RPC error response 00:25:26.034 response: 00:25:26.034 { 00:25:26.034 "code": -32601, 00:25:26.034 "message": "Method not found" 00:25:26.034 } 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:26.034 07:34:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66405 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66405 ']' 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66405 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.034 07:34:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66405 00:25:26.035 07:34:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:26.035 07:34:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:26.035 killing process with pid 66405 00:25:26.035 07:34:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66405' 00:25:26.035 07:34:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 66405 00:25:26.035 07:34:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 66405 00:25:28.606 00:25:28.606 real 0m4.604s 00:25:28.606 user 0m4.791s 00:25:28.606 sys 0m0.748s 00:25:28.606 07:34:07 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.606 ************************************ 00:25:28.606 END TEST app_cmdline 00:25:28.606 ************************************ 00:25:28.606 07:34:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:28.606 07:34:07 -- common/autotest_common.sh@1142 -- # return 0 00:25:28.606 07:34:07 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:28.606 07:34:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:28.606 07:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.606 07:34:07 -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.606 ************************************ 00:25:28.606 START TEST version 00:25:28.606 ************************************ 00:25:28.606 07:34:07 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:28.606 * Looking for test storage... 00:25:28.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:28.606 07:34:07 version -- app/version.sh@17 -- # get_header_version major 00:25:28.606 07:34:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # cut -f2 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # tr -d '"' 00:25:28.606 07:34:07 version -- app/version.sh@17 -- # major=24 00:25:28.606 07:34:07 version -- app/version.sh@18 -- # get_header_version minor 00:25:28.606 07:34:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # cut -f2 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # tr -d '"' 00:25:28.606 07:34:07 version -- app/version.sh@18 -- # minor=9 00:25:28.606 07:34:07 version -- app/version.sh@19 -- # get_header_version patch 00:25:28.606 07:34:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # tr -d '"' 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # cut -f2 00:25:28.606 07:34:07 version -- app/version.sh@19 -- # patch=0 00:25:28.606 07:34:07 version -- app/version.sh@20 -- # get_header_version suffix 00:25:28.606 07:34:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # cut -f2 00:25:28.606 07:34:07 version -- app/version.sh@14 -- # tr -d '"' 00:25:28.606 07:34:07 version -- app/version.sh@20 -- # suffix=-pre 00:25:28.606 07:34:07 version -- app/version.sh@22 -- # version=24.9 00:25:28.606 07:34:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:25:28.606 07:34:07 version -- app/version.sh@28 -- # version=24.9rc0 00:25:28.606 07:34:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:28.606 07:34:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:25:28.606 07:34:07 version -- app/version.sh@30 -- # py_version=24.9rc0 00:25:28.606 07:34:07 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:25:28.606 00:25:28.606 real 0m0.134s 00:25:28.606 user 0m0.080s 00:25:28.606 sys 0m0.087s 00:25:28.606 07:34:07 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.606 07:34:07 version -- common/autotest_common.sh@10 -- # set +x 00:25:28.606 ************************************ 00:25:28.606 END TEST version 00:25:28.606 ************************************ 00:25:28.864 07:34:07 -- common/autotest_common.sh@1142 -- # return 0 00:25:28.864 07:34:07 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:25:28.864 07:34:07 -- spdk/autotest.sh@198 -- # uname -s 00:25:28.864 07:34:07 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 
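The version test above reduces to the grep/cut/tr pipeline traced at app/version.sh@13-@14 against include/spdk/version.h. Below is a hedged, self-contained reconstruction of that parse; the real version.sh may differ in wording, and the -pre to rc0 mapping condition is an assumption inferred from the version=24.9rc0 assignment seen at version.sh@28.

    #!/usr/bin/env bash
    # Hedged reconstruction of the header parse traced above (app/version.sh@13-@14).
    header=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR<TAB>24' -> 24; quoted fields lose their quotes via tr
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24 in this run
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch     # skipped here: patch == 0 (version.sh@25)
    [[ $suffix == -pre ]] && version=${version}rc0  # assumption for how 24.9 becomes 24.9rc0 (version.sh@28)
    echo "$version"   # the test then compares this against python3 -c 'import spdk; print(spdk.__version__)'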
00:25:28.864 07:34:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:25:28.864 07:34:07 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:25:28.864 07:34:07 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:25:28.864 07:34:07 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:28.864 07:34:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.864 07:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.864 07:34:07 -- common/autotest_common.sh@10 -- # set +x 00:25:28.864 ************************************ 00:25:28.864 START TEST blockdev_nvme 00:25:28.864 ************************************ 00:25:28.864 07:34:07 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:28.864 * Looking for test storage... 00:25:28.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:28.864 07:34:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:25:28.864 07:34:07 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66572 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:28.865 07:34:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66572 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66572 
']' 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.865 07:34:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:29.121 [2024-07-15 07:34:07.490169] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:29.121 [2024-07-15 07:34:07.490359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66572 ] 00:25:29.121 [2024-07-15 07:34:07.670891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.379 [2024-07-15 07:34:07.975850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.348 07:34:08 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.348 07:34:08 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:30.348 07:34:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:25:30.348 07:34:08 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.348 07:34:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 
07:34:09 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fce82ec2-f3ee-40ea-9328-61e362e442d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fce82ec2-f3ee-40ea-9328-61e362e442d4",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c9c70313-a807-412f-bb8b-7c42a94e407b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c9c70313-a807-412f-bb8b-7c42a94e407b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' 
"reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a9aa62bc-a974-418f-a7fe-2b57325283e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9aa62bc-a974-418f-a7fe-2b57325283e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8f808575-6690-46ea-b692-a444856084a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f808575-6690-46ea-b692-a444856084a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": 
{' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "67293f27-2646-4cb9-9967-bdaeca442c27"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67293f27-2646-4cb9-9967-bdaeca442c27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "54ff1a3a-7a8d-45a7-b73d-810407d06db2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "54ff1a3a-7a8d-45a7-b73d-810407d06db2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:25:30.917 07:34:09 blockdev_nvme -- 
bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:25:30.917 07:34:09 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66572 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66572 ']' 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66572 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66572 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.917 killing process with pid 66572 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66572' 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66572 00:25:30.917 07:34:09 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66572 00:25:33.441 07:34:11 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:33.441 07:34:11 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:33.441 07:34:11 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:33.441 07:34:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.441 07:34:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:33.441 ************************************ 00:25:33.441 START TEST bdev_hello_world 00:25:33.441 ************************************ 00:25:33.441 07:34:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:33.699 [2024-07-15 07:34:12.104929] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
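[editor note] The bdev_hello_world test starting here is just the hello_bdev example pointed at the generated config and told which bdev to open; condensed from the invocation recorded above (run from the repo root, paths shortened for illustration):
  # Open Nvme0n1, write "Hello World!", read it back, then stop the app.
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1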
00:25:33.699 [2024-07-15 07:34:12.105125] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66678 ] 00:25:33.699 [2024-07-15 07:34:12.275825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.957 [2024-07-15 07:34:12.555691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.892 [2024-07-15 07:34:13.257429] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:34.892 [2024-07-15 07:34:13.257532] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:25:34.892 [2024-07-15 07:34:13.257586] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:34.892 [2024-07-15 07:34:13.260936] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:34.892 [2024-07-15 07:34:13.261482] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:34.892 [2024-07-15 07:34:13.261531] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:34.892 [2024-07-15 07:34:13.261794] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:34.892 00:25:34.892 [2024-07-15 07:34:13.261842] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:36.267 00:25:36.267 real 0m2.507s 00:25:36.267 user 0m2.036s 00:25:36.267 sys 0m0.359s 00:25:36.267 07:34:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:36.267 07:34:14 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:36.267 ************************************ 00:25:36.267 END TEST bdev_hello_world 00:25:36.267 ************************************ 00:25:36.267 07:34:14 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:25:36.267 07:34:14 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:25:36.267 07:34:14 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:36.267 07:34:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.267 07:34:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:36.267 ************************************ 00:25:36.267 START TEST bdev_bounds 00:25:36.267 ************************************ 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66720 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66720' 00:25:36.267 Process bdevio pid: 66720 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66720 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66720 ']' 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.267 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.267 07:34:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:36.267 [2024-07-15 07:34:14.615212] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:36.267 [2024-07-15 07:34:14.615384] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66720 ] 00:25:36.267 [2024-07-15 07:34:14.782037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.533 [2024-07-15 07:34:15.059244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.533 [2024-07-15 07:34:15.059359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.533 [2024-07-15 07:34:15.059397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.465 07:34:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.465 07:34:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:25:37.465 07:34:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:37.465 I/O targets: 00:25:37.465 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:25:37.465 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:25:37.465 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:37.465 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:37.465 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:25:37.465 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:25:37.465 00:25:37.465 00:25:37.465 CUnit - A unit testing framework for C - Version 2.1-3 00:25:37.465 http://cunit.sourceforge.net/ 00:25:37.465 00:25:37.465 00:25:37.465 Suite: bdevio tests on: Nvme3n1 00:25:37.465 Test: blockdev write read block ...passed 00:25:37.465 Test: blockdev write zeroes read block ...passed 00:25:37.465 Test: blockdev write zeroes read no split ...passed 00:25:37.465 Test: blockdev write zeroes read split ...passed 00:25:37.465 Test: blockdev write zeroes read split partial ...passed 00:25:37.465 Test: blockdev reset ...[2024-07-15 07:34:15.982778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:25:37.465 [2024-07-15 07:34:15.986871] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
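[editor note] The bdev_bounds test drives bdevio in two steps, both visible above: the bdevio app is started with -w so it initializes and then waits, and tests.py perform_tests triggers the per-bdev suites that follow over RPC. A minimal sketch of the same flow (backgrounding with & and waiting for the listener are simplifications of what the harness does):
  # Start bdevio against the generated config; -w makes it wait for the RPC trigger.
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  # Once the app is listening, kick off the whole test matrix.
  ./test/bdev/bdevio/tests.py perform_tests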
00:25:37.465 passed 00:25:37.465 Test: blockdev write read 8 blocks ...passed 00:25:37.465 Test: blockdev write read size > 128k ...passed 00:25:37.465 Test: blockdev write read invalid size ...passed 00:25:37.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.465 Test: blockdev write read max offset ...passed 00:25:37.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.465 Test: blockdev writev readv 8 blocks ...passed 00:25:37.465 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.465 Test: blockdev writev readv block ...passed 00:25:37.465 Test: blockdev writev readv size > 128k ...passed 00:25:37.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.465 Test: blockdev comparev and writev ...[2024-07-15 07:34:15.995048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f00a000 len:0x1000 00:25:37.465 [2024-07-15 07:34:15.995134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:37.465 passed 00:25:37.465 Test: blockdev nvme passthru rw ...passed 00:25:37.465 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:34:15.996021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:37.465 [2024-07-15 07:34:15.996066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:37.465 passed 00:25:37.465 Test: blockdev nvme admin passthru ...passed 00:25:37.465 Test: blockdev copy ...passed 00:25:37.465 Suite: bdevio tests on: Nvme2n3 00:25:37.466 Test: blockdev write read block ...passed 00:25:37.466 Test: blockdev write zeroes read block ...passed 00:25:37.466 Test: blockdev write zeroes read no split ...passed 00:25:37.466 Test: blockdev write zeroes read split ...passed 00:25:37.466 Test: blockdev write zeroes read split partial ...passed 00:25:37.466 Test: blockdev reset ...[2024-07-15 07:34:16.073757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:25:37.723 [2024-07-15 07:34:16.078235] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.723 passed 00:25:37.723 Test: blockdev write read 8 blocks ...passed 00:25:37.723 Test: blockdev write read size > 128k ...passed 00:25:37.723 Test: blockdev write read invalid size ...passed 00:25:37.723 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.723 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.723 Test: blockdev write read max offset ...passed 00:25:37.723 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.723 Test: blockdev writev readv 8 blocks ...passed 00:25:37.723 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.723 Test: blockdev writev readv block ...passed 00:25:37.723 Test: blockdev writev readv size > 128k ...passed 00:25:37.723 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.723 Test: blockdev comparev and writev ...[2024-07-15 07:34:16.087483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e804000 len:0x1000 00:25:37.723 [2024-07-15 07:34:16.087543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:37.723 passed 00:25:37.723 Test: blockdev nvme passthru rw ...passed 00:25:37.723 Test: blockdev nvme passthru vendor specific ...passed 00:25:37.724 Test: blockdev nvme admin passthru ...[2024-07-15 07:34:16.088464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:37.724 [2024-07-15 07:34:16.088509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:37.724 passed 00:25:37.724 Test: blockdev copy ...passed 00:25:37.724 Suite: bdevio tests on: Nvme2n2 00:25:37.724 Test: blockdev write read block ...passed 00:25:37.724 Test: blockdev write zeroes read block ...passed 00:25:37.724 Test: blockdev write zeroes read no split ...passed 00:25:37.724 Test: blockdev write zeroes read split ...passed 00:25:37.724 Test: blockdev write zeroes read split partial ...passed 00:25:37.724 Test: blockdev reset ...[2024-07-15 07:34:16.165867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:25:37.724 [2024-07-15 07:34:16.170561] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.724 passed 00:25:37.724 Test: blockdev write read 8 blocks ...passed 00:25:37.724 Test: blockdev write read size > 128k ...passed 00:25:37.724 Test: blockdev write read invalid size ...passed 00:25:37.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.724 Test: blockdev write read max offset ...passed 00:25:37.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.724 Test: blockdev writev readv 8 blocks ...passed 00:25:37.724 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.724 Test: blockdev writev readv block ...passed 00:25:37.724 Test: blockdev writev readv size > 128k ...passed 00:25:37.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.724 Test: blockdev comparev and writev ...[2024-07-15 07:34:16.180097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:25:37.724 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x27b23a000 len:0x1000 00:25:37.724 [2024-07-15 07:34:16.180313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:37.724 passed 00:25:37.724 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:34:16.181399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:37.724 [2024-07-15 07:34:16.181444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:37.724 passed 00:25:37.724 Test: blockdev nvme admin passthru ...passed 00:25:37.724 Test: blockdev copy ...passed 00:25:37.724 Suite: bdevio tests on: Nvme2n1 00:25:37.724 Test: blockdev write read block ...passed 00:25:37.724 Test: blockdev write zeroes read block ...passed 00:25:37.724 Test: blockdev write zeroes read no split ...passed 00:25:37.724 Test: blockdev write zeroes read split ...passed 00:25:37.724 Test: blockdev write zeroes read split partial ...passed 00:25:37.724 Test: blockdev reset ...[2024-07-15 07:34:16.248667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:25:37.724 [2024-07-15 07:34:16.253160] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.724 passed 00:25:37.724 Test: blockdev write read 8 blocks ...passed 00:25:37.724 Test: blockdev write read size > 128k ...passed 00:25:37.724 Test: blockdev write read invalid size ...passed 00:25:37.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.724 Test: blockdev write read max offset ...passed 00:25:37.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.724 Test: blockdev writev readv 8 blocks ...passed 00:25:37.724 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.724 Test: blockdev writev readv block ...passed 00:25:37.724 Test: blockdev writev readv size > 128k ...passed 00:25:37.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.724 Test: blockdev comparev and writev ...[2024-07-15 07:34:16.262809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b234000 len:0x1000 00:25:37.724 [2024-07-15 07:34:16.263047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:37.724 passed 00:25:37.724 Test: blockdev nvme passthru rw ...passed 00:25:37.724 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:34:16.264365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:37.724 [2024-07-15 07:34:16.264622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:25:37.724 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:25:37.724 passed 00:25:37.724 Test: blockdev copy ...passed 00:25:37.724 Suite: bdevio tests on: Nvme1n1 00:25:37.724 Test: blockdev write read block ...passed 00:25:37.724 Test: blockdev write zeroes read block ...passed 00:25:37.724 Test: blockdev write zeroes read no split ...passed 00:25:37.724 Test: blockdev write zeroes read split ...passed 00:25:37.724 Test: blockdev write zeroes read split partial ...passed 00:25:37.724 Test: blockdev reset ...[2024-07-15 07:34:16.328436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:25:37.724 [2024-07-15 07:34:16.332435] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.724 passed 00:25:37.724 Test: blockdev write read 8 blocks ...passed 00:25:37.724 Test: blockdev write read size > 128k ...passed 00:25:37.724 Test: blockdev write read invalid size ...passed 00:25:37.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.724 Test: blockdev write read max offset ...passed 00:25:37.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.982 Test: blockdev writev readv 8 blocks ...passed 00:25:37.982 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.982 Test: blockdev writev readv block ...passed 00:25:37.982 Test: blockdev writev readv size > 128k ...passed 00:25:37.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.982 Test: blockdev comparev and writev ...[2024-07-15 07:34:16.341976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b230000 len:0x1000 00:25:37.982 [2024-07-15 07:34:16.342044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:37.982 passed 00:25:37.982 Test: blockdev nvme passthru rw ...passed 00:25:37.982 Test: blockdev nvme passthru vendor specific ...passed 00:25:37.982 Test: blockdev nvme admin passthru ...[2024-07-15 07:34:16.342880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:37.982 [2024-07-15 07:34:16.342931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:37.982 passed 00:25:37.982 Test: blockdev copy ...passed 00:25:37.982 Suite: bdevio tests on: Nvme0n1 00:25:37.982 Test: blockdev write read block ...passed 00:25:37.982 Test: blockdev write zeroes read block ...passed 00:25:37.982 Test: blockdev write zeroes read no split ...passed 00:25:37.982 Test: blockdev write zeroes read split ...passed 00:25:37.982 Test: blockdev write zeroes read split partial ...passed 00:25:37.982 Test: blockdev reset ...[2024-07-15 07:34:16.408099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:25:37.982 [2024-07-15 07:34:16.411977] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.982 passed 00:25:37.982 Test: blockdev write read 8 blocks ...passed 00:25:37.982 Test: blockdev write read size > 128k ...passed 00:25:37.982 Test: blockdev write read invalid size ...passed 00:25:37.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:37.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:37.982 Test: blockdev write read max offset ...passed 00:25:37.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:37.982 Test: blockdev writev readv 8 blocks ...passed 00:25:37.982 Test: blockdev writev readv 30 x 1block ...passed 00:25:37.982 Test: blockdev writev readv block ...passed 00:25:37.982 Test: blockdev writev readv size > 128k ...passed 00:25:37.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:37.982 Test: blockdev comparev and writev ...passed 00:25:37.982 Test: blockdev nvme passthru rw ...[2024-07-15 07:34:16.420164] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:25:37.982 separate metadata which is not supported yet. 00:25:37.982 passed 00:25:37.982 Test: blockdev nvme passthru vendor specific ...passed 00:25:37.982 Test: blockdev nvme admin passthru ...[2024-07-15 07:34:16.420671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:25:37.982 [2024-07-15 07:34:16.420739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:25:37.982 passed 00:25:37.982 Test: blockdev copy ...passed 00:25:37.982 00:25:37.982 Run Summary: Type Total Ran Passed Failed Inactive 00:25:37.982 suites 6 6 n/a 0 0 00:25:37.982 tests 138 138 138 0 0 00:25:37.982 asserts 893 893 893 0 n/a 00:25:37.982 00:25:37.982 Elapsed time = 1.393 seconds 00:25:37.982 0 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66720 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66720 ']' 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66720 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66720 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66720' 00:25:37.982 killing process with pid 66720 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66720 00:25:37.982 07:34:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66720 00:25:39.352 07:34:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:25:39.352 00:25:39.352 real 0m3.102s 00:25:39.352 user 0m7.460s 00:25:39.352 sys 0m0.487s 00:25:39.352 07:34:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:39.352 07:34:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 ************************************ 00:25:39.352 END 
TEST bdev_bounds 00:25:39.352 ************************************ 00:25:39.352 07:34:17 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:25:39.353 07:34:17 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:25:39.353 07:34:17 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:39.353 07:34:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.353 07:34:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:39.353 ************************************ 00:25:39.353 START TEST bdev_nbd 00:25:39.353 ************************************ 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66790 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66790 /var/tmp/spdk-nbd.sock 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66790 ']' 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.353 07:34:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:39.353 [2024-07-15 07:34:17.782096] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:39.353 [2024-07-15 07:34:17.782263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.353 [2024-07-15 07:34:17.951618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.656 [2024-07-15 07:34:18.225794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:40.590 07:34:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:25:40.848 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:40.849 1+0 records in 00:25:40.849 1+0 records out 00:25:40.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804636 s, 5.1 MB/s 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:40.849 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.107 1+0 records in 00:25:41.107 1+0 records out 00:25:41.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647508 s, 6.3 MB/s 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:41.107 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.365 1+0 records in 00:25:41.365 1+0 records out 00:25:41.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783078 s, 5.2 MB/s 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:41.365 07:34:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:41.366 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:41.366 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:41.366 07:34:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.623 1+0 records in 00:25:41.623 1+0 records out 00:25:41.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063966 s, 6.4 MB/s 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:41.623 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:25:41.881 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.882 1+0 records in 00:25:41.882 1+0 records out 00:25:41.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656231 s, 6.2 MB/s 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:25:41.882 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:42.140 1+0 records in 00:25:42.140 1+0 records out 00:25:42.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075319 s, 5.4 MB/s 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:25:42.140 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:42.399 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd0", 00:25:42.399 "bdev_name": "Nvme0n1" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd1", 00:25:42.399 "bdev_name": "Nvme1n1" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd2", 00:25:42.399 "bdev_name": "Nvme2n1" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd3", 00:25:42.399 "bdev_name": "Nvme2n2" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd4", 00:25:42.399 "bdev_name": "Nvme2n3" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd5", 00:25:42.399 "bdev_name": "Nvme3n1" 00:25:42.399 } 00:25:42.399 ]' 00:25:42.399 07:34:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:42.399 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd0", 00:25:42.399 "bdev_name": "Nvme0n1" 00:25:42.399 }, 00:25:42.399 { 
00:25:42.399 "nbd_device": "/dev/nbd1", 00:25:42.399 "bdev_name": "Nvme1n1" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd2", 00:25:42.399 "bdev_name": "Nvme2n1" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd3", 00:25:42.399 "bdev_name": "Nvme2n2" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd4", 00:25:42.399 "bdev_name": "Nvme2n3" 00:25:42.399 }, 00:25:42.399 { 00:25:42.399 "nbd_device": "/dev/nbd5", 00:25:42.399 "bdev_name": "Nvme3n1" 00:25:42.399 } 00:25:42.399 ]' 00:25:42.399 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.679 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.939 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.198 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:25:43.456 
07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.456 07:34:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.715 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:43.973 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:44.231 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
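The teardown traced above follows a fixed pattern: each /dev/nbdN is detached with an nbd_stop_disk RPC, the script then polls /proc/partitions until the kernel entry disappears, and nbd_get_count finally checks that nbd_get_disks reports an empty list. A minimal bash sketch of that detach-and-verify pattern, assuming rpc.py and the /var/tmp/spdk-nbd.sock socket shown in the trace (an illustrative helper, not the verbatim nbd_common.sh code):

# Detach every exported nbd device, then confirm kernel and RPC server agree nothing is left.
nbd_teardown() {
  local rpc_sock=$1; shift
  local dev name i count
  for dev in "$@"; do
    scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions || break   # entry gone -> device released
      sleep 0.1
    done
  done
  # grep -c exits non-zero on a zero count, hence the || true
  count=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [[ $count -eq 0 ]]
}

# usage mirroring the trace:
# nbd_teardown /var/tmp/spdk-nbd.sock /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5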
00:25:44.490 07:34:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:25:44.748 /dev/nbd0 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:44.749 1+0 records in 00:25:44.749 1+0 records out 00:25:44.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870886 s, 4.7 MB/s 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:44.749 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:25:45.006 /dev/nbd1 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:25:45.006 1+0 records in 00:25:45.006 1+0 records out 00:25:45.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494131 s, 8.3 MB/s 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:45.006 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:25:45.265 /dev/nbd10 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.265 1+0 records in 00:25:45.265 1+0 records out 00:25:45.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683707 s, 6.0 MB/s 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:45.265 07:34:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:25:45.567 /dev/nbd11 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.567 1+0 records in 00:25:45.567 1+0 records out 00:25:45.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062117 s, 6.6 MB/s 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:45.567 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:25:45.824 /dev/nbd12 00:25:45.824 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:25:45.824 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:25:45.824 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:25:45.824 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:45.824 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.825 1+0 records in 00:25:45.825 1+0 records out 00:25:45.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00339071 s, 1.2 MB/s 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:45.825 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:25:46.082 /dev/nbd13 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.341 1+0 records in 00:25:46.341 1+0 records out 00:25:46.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805571 s, 5.1 MB/s 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:46.341 07:34:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:46.600 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd0", 00:25:46.600 "bdev_name": "Nvme0n1" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd1", 00:25:46.600 "bdev_name": "Nvme1n1" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd10", 00:25:46.600 "bdev_name": "Nvme2n1" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd11", 00:25:46.600 
"bdev_name": "Nvme2n2" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd12", 00:25:46.600 "bdev_name": "Nvme2n3" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd13", 00:25:46.600 "bdev_name": "Nvme3n1" 00:25:46.600 } 00:25:46.600 ]' 00:25:46.600 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd0", 00:25:46.600 "bdev_name": "Nvme0n1" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd1", 00:25:46.600 "bdev_name": "Nvme1n1" 00:25:46.600 }, 00:25:46.600 { 00:25:46.600 "nbd_device": "/dev/nbd10", 00:25:46.600 "bdev_name": "Nvme2n1" 00:25:46.600 }, 00:25:46.601 { 00:25:46.601 "nbd_device": "/dev/nbd11", 00:25:46.601 "bdev_name": "Nvme2n2" 00:25:46.601 }, 00:25:46.601 { 00:25:46.601 "nbd_device": "/dev/nbd12", 00:25:46.601 "bdev_name": "Nvme2n3" 00:25:46.601 }, 00:25:46.601 { 00:25:46.601 "nbd_device": "/dev/nbd13", 00:25:46.601 "bdev_name": "Nvme3n1" 00:25:46.601 } 00:25:46.601 ]' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:46.601 /dev/nbd1 00:25:46.601 /dev/nbd10 00:25:46.601 /dev/nbd11 00:25:46.601 /dev/nbd12 00:25:46.601 /dev/nbd13' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:46.601 /dev/nbd1 00:25:46.601 /dev/nbd10 00:25:46.601 /dev/nbd11 00:25:46.601 /dev/nbd12 00:25:46.601 /dev/nbd13' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:46.601 256+0 records in 00:25:46.601 256+0 records out 00:25:46.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972968 s, 108 MB/s 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:46.601 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:46.859 256+0 records in 00:25:46.859 256+0 records out 00:25:46.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155327 s, 6.8 MB/s 00:25:46.860 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:25:46.860 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:46.860 256+0 records in 00:25:46.860 256+0 records out 00:25:46.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161076 s, 6.5 MB/s 00:25:46.860 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:46.860 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:47.118 256+0 records in 00:25:47.118 256+0 records out 00:25:47.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157195 s, 6.7 MB/s 00:25:47.118 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:47.118 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:47.377 256+0 records in 00:25:47.377 256+0 records out 00:25:47.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131454 s, 8.0 MB/s 00:25:47.377 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:47.377 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:25:47.377 256+0 records in 00:25:47.377 256+0 records out 00:25:47.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16501 s, 6.4 MB/s 00:25:47.377 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:47.377 07:34:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:25:47.635 256+0 records in 00:25:47.635 256+0 records out 00:25:47.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153576 s, 6.8 MB/s 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.635 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.894 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.153 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.412 07:34:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.670 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.928 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:25:49.186 07:34:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.186 07:34:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:49.445 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:49.445 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:49.445 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:49.703 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:49.961 malloc_lvol_verify 00:25:49.961 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:50.219 d77e4dd5-9951-49dd-81c3-fb886fc07a2f 00:25:50.219 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:50.477 afbd1e13-2fce-493b-8a31-8794992b04a3 00:25:50.477 07:34:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:50.751 /dev/nbd0 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:50.751 mke2fs 1.46.5 (30-Dec-2021) 00:25:50.751 Discarding device blocks: 0/4096 done 00:25:50.751 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:50.751 00:25:50.751 Allocating group tables: 0/1 done 00:25:50.751 Writing inode tables: 0/1 done 00:25:50.751 Creating journal (1024 blocks): done 00:25:50.751 Writing superblocks and filesystem accounting information: 0/1 done 00:25:50.751 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:50.751 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66790 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66790 ']' 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66790 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66790 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:51.009 killing process with pid 66790 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66790' 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66790 00:25:51.009 07:34:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66790 00:25:52.383 07:34:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:25:52.383 00:25:52.383 real 0m13.134s 
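The lvol leg just traced builds a small logical-volume stack entirely over the RPC socket and treats a successful mkfs.ext4 on the resulting nbd device as the pass signal. Condensed, the sequence of RPCs is the following (sizes and names copied from the trace; error handling and the /proc/partitions polling are omitted here for brevity):

rpc=/var/tmp/spdk-nbd.sock
scripts/rpc.py -s "$rpc" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
scripts/rpc.py -s "$rpc" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
scripts/rpc.py -s "$rpc" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol inside the store
scripts/rpc.py -s "$rpc" nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                                        # formatting it is the actual verification
scripts/rpc.py -s "$rpc" nbd_stop_disk /dev/nbd0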
00:25:52.383 user 0m18.252s 00:25:52.383 sys 0m4.313s 00:25:52.383 07:34:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.383 07:34:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:52.383 ************************************ 00:25:52.383 END TEST bdev_nbd 00:25:52.383 ************************************ 00:25:52.383 07:34:30 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:25:52.383 07:34:30 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:25:52.383 07:34:30 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:25:52.383 skipping fio tests on NVMe due to multi-ns failures. 00:25:52.383 07:34:30 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:25:52.383 07:34:30 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:52.383 07:34:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:52.383 07:34:30 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:25:52.383 07:34:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.383 07:34:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:25:52.383 ************************************ 00:25:52.383 START TEST bdev_verify 00:25:52.383 ************************************ 00:25:52.383 07:34:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:52.383 [2024-07-15 07:34:30.990849] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:25:52.384 [2024-07-15 07:34:30.991064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67192 ] 00:25:52.642 [2024-07-15 07:34:31.175362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:52.900 [2024-07-15 07:34:31.480323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.900 [2024-07-15 07:34:31.480342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.835 Running I/O for 5 seconds... 
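With the nbd tests done, bdev_verify switches to the bdevperf example application: the whole stage is one invocation against the generated bdev.json, run in verify mode for five seconds on two cores. Reconstructed from the command line captured above (paths are relative to the SPDK checkout; per bdevperf usage, -q is queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds and -m the core mask):

# 5-second verify workload over all bdevs described in test/bdev/bdev.json
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3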
00:25:59.101 00:25:59.101 Latency(us) 00:25:59.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.101 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0xbd0bd 00:25:59.101 Nvme0n1 : 5.06 1416.03 5.53 0.00 0.00 90206.53 12571.00 92465.34 00:25:59.101 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:59.101 Nvme0n1 : 5.09 1458.00 5.70 0.00 0.00 87597.46 13643.40 82456.20 00:25:59.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0xa0000 00:25:59.101 Nvme1n1 : 5.06 1415.57 5.53 0.00 0.00 90104.44 12630.57 86269.21 00:25:59.101 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0xa0000 length 0xa0000 00:25:59.101 Nvme1n1 : 5.09 1457.48 5.69 0.00 0.00 87376.63 14000.87 80549.70 00:25:59.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0x80000 00:25:59.101 Nvme2n1 : 5.07 1415.13 5.53 0.00 0.00 89986.23 12213.53 80549.70 00:25:59.101 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x80000 length 0x80000 00:25:59.101 Nvme2n1 : 5.10 1456.97 5.69 0.00 0.00 87203.39 13226.36 77689.95 00:25:59.101 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0x80000 00:25:59.101 Nvme2n2 : 5.07 1414.67 5.53 0.00 0.00 89855.99 12332.68 76736.70 00:25:59.101 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x80000 length 0x80000 00:25:59.101 Nvme2n2 : 5.10 1456.45 5.69 0.00 0.00 87069.64 13583.83 77213.32 00:25:59.101 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0x80000 00:25:59.101 Nvme2n3 : 5.07 1414.22 5.52 0.00 0.00 89711.51 12690.15 83886.08 00:25:59.101 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x80000 length 0x80000 00:25:59.101 Nvme2n3 : 5.10 1455.93 5.69 0.00 0.00 86922.02 13822.14 79119.83 00:25:59.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x0 length 0x20000 00:25:59.101 Nvme3n1 : 5.07 1413.77 5.52 0.00 0.00 89562.90 12571.00 92465.34 00:25:59.101 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.101 Verification LBA range: start 0x20000 length 0x20000 00:25:59.101 Nvme3n1 : 5.10 1455.41 5.69 0.00 0.00 86825.02 10783.65 81502.95 00:25:59.101 =================================================================================================================== 00:25:59.101 Total : 17229.64 67.30 0.00 0.00 88511.12 10783.65 92465.34 00:26:00.479 00:26:00.479 real 0m8.069s 00:26:00.479 user 0m14.401s 00:26:00.479 sys 0m0.401s 00:26:00.479 07:34:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.479 07:34:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:00.479 ************************************ 00:26:00.479 END TEST bdev_verify 00:26:00.479 ************************************ 00:26:00.479 07:34:38 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:26:00.479 07:34:38 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:00.479 07:34:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:00.479 07:34:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.479 07:34:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:26:00.479 ************************************ 00:26:00.479 START TEST bdev_verify_big_io 00:26:00.479 ************************************ 00:26:00.479 07:34:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:00.479 [2024-07-15 07:34:39.088867] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:00.479 [2024-07-15 07:34:39.089060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67296 ] 00:26:00.738 [2024-07-15 07:34:39.260218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:00.996 [2024-07-15 07:34:39.541510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.996 [2024-07-15 07:34:39.541520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.932 Running I/O for 5 seconds... 00:26:08.493 00:26:08.493 Latency(us) 00:26:08.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.493 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x0 length 0xbd0b 00:26:08.493 Nvme0n1 : 5.53 138.96 8.69 0.00 0.00 889187.61 34555.35 804543.77 00:26:08.493 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0xbd0b length 0xbd0b 00:26:08.493 Nvme0n1 : 5.73 129.47 8.09 0.00 0.00 946349.07 19065.02 983754.94 00:26:08.493 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x0 length 0xa000 00:26:08.493 Nvme1n1 : 5.68 138.09 8.63 0.00 0.00 855711.09 90558.84 854112.81 00:26:08.493 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0xa000 length 0xa000 00:26:08.493 Nvme1n1 : 5.73 130.29 8.14 0.00 0.00 916622.21 99138.09 915120.87 00:26:08.493 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x0 length 0x8000 00:26:08.493 Nvme2n1 : 5.72 145.56 9.10 0.00 0.00 810409.35 33602.09 842673.80 00:26:08.493 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x8000 length 0x8000 00:26:08.493 Nvme2n1 : 5.74 123.78 7.74 0.00 0.00 938041.19 91988.71 1403185.34 00:26:08.493 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x0 length 0x8000 00:26:08.493 Nvme2n2 : 5.74 152.68 9.54 0.00 0.00 763124.01 8102.63 842673.80 00:26:08.493 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:26:08.493 Verification LBA range: start 0x8000 length 0x8000 00:26:08.493 Nvme2n2 : 5.80 136.53 8.53 0.00 0.00 832051.74 17635.14 1624339.55 00:26:08.493 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x0 length 0x8000 00:26:08.493 Nvme2n3 : 5.74 152.28 9.52 0.00 0.00 747885.94 8460.10 846486.81 00:26:08.493 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:08.493 Verification LBA range: start 0x8000 length 0x8000 00:26:08.493 Nvme2n3 : 5.80 140.07 8.75 0.00 0.00 789800.88 13941.29 1441315.37 00:26:08.493 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:08.494 Verification LBA range: start 0x0 length 0x2000 00:26:08.494 Nvme3n1 : 5.74 156.03 9.75 0.00 0.00 714829.03 7864.32 846486.81 00:26:08.494 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:08.494 Verification LBA range: start 0x2000 length 0x2000 00:26:08.494 Nvme3n1 : 5.85 163.15 10.20 0.00 0.00 667568.31 1154.33 1662469.59 00:26:08.494 =================================================================================================================== 00:26:08.494 Total : 1706.87 106.68 0.00 0.00 815441.38 1154.33 1662469.59 00:26:09.869 00:26:09.869 real 0m9.153s 00:26:09.869 user 0m16.614s 00:26:09.869 sys 0m0.425s 00:26:09.869 07:34:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.869 07:34:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:09.869 ************************************ 00:26:09.869 END TEST bdev_verify_big_io 00:26:09.869 ************************************ 00:26:09.869 07:34:48 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:26:09.869 07:34:48 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:09.869 07:34:48 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:09.869 07:34:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.869 07:34:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:26:09.869 ************************************ 00:26:09.869 START TEST bdev_write_zeroes 00:26:09.869 ************************************ 00:26:09.869 07:34:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:09.869 [2024-07-15 07:34:48.293644] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:09.869 [2024-07-15 07:34:48.293815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67418 ] 00:26:09.869 [2024-07-15 07:34:48.463471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.127 [2024-07-15 07:34:48.738977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.062 Running I/O for 1 seconds... 
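The two stages around this point reuse exactly the same harness and only change the workload parameters: bdev_verify_big_io, whose results appear just above, raises the I/O size to 64 KiB, and bdev_write_zeroes, whose run has just started, switches to the write_zeroes command for a one-second, single-core pass. The variants as captured in the log:

# big-I/O verify: same queue depth, 64 KiB I/Os
./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
# write_zeroes smoke test: 4 KiB I/Os, one second, single core
./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1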
00:26:11.994 00:26:11.994 Latency(us) 00:26:11.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.994 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme0n1 : 1.01 8527.59 33.31 0.00 0.00 14957.08 9115.46 17992.61 00:26:11.994 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme1n1 : 1.01 8513.99 33.26 0.00 0.00 14952.81 9234.62 17635.14 00:26:11.994 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme2n1 : 1.02 8501.14 33.21 0.00 0.00 14946.42 9472.93 17515.99 00:26:11.994 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme2n2 : 1.02 8538.44 33.35 0.00 0.00 14897.39 8877.15 17992.61 00:26:11.994 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme2n3 : 1.02 8525.27 33.30 0.00 0.00 14874.87 7596.22 17992.61 00:26:11.994 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:11.994 Nvme3n1 : 1.02 8511.98 33.25 0.00 0.00 14869.01 7149.38 17873.45 00:26:11.994 =================================================================================================================== 00:26:11.994 Total : 51118.41 199.68 0.00 0.00 14916.13 7149.38 17992.61 00:26:13.366 00:26:13.366 real 0m3.775s 00:26:13.366 user 0m3.295s 00:26:13.366 sys 0m0.354s 00:26:13.366 07:34:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.366 07:34:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:13.366 ************************************ 00:26:13.366 END TEST bdev_write_zeroes 00:26:13.366 ************************************ 00:26:13.624 07:34:52 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:26:13.624 07:34:52 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:13.624 07:34:52 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:13.624 07:34:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.624 07:34:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:26:13.624 ************************************ 00:26:13.624 START TEST bdev_json_nonenclosed 00:26:13.624 ************************************ 00:26:13.624 07:34:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:13.624 [2024-07-15 07:34:52.124862] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
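bdev_json_nonenclosed, which starts here, and bdev_json_nonarray after it are negative tests: each hands bdevperf a deliberately malformed --json configuration and passes only if the application refuses to start, which run_test observes as the exit status 234 recorded in the trace. The shape of the check is roughly this (the config content shown is an illustrative guess at what "not enclosed in {}" means, not the actual nonenclosed.json shipped with SPDK):

# Feed bdevperf a config whose top level is not wrapped in {} and expect a startup failure.
cat > /tmp/bad_config.json <<'EOF'
"subsystems": []
EOF
if ./build/examples/bdevperf --json /tmp/bad_config.json -q 128 -o 4096 -w write_zeroes -t 1; then
  echo "malformed config was accepted" >&2
  exit 1
fi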
00:26:13.624 [2024-07-15 07:34:52.125098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67477 ] 00:26:13.945 [2024-07-15 07:34:52.292652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.203 [2024-07-15 07:34:52.575154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.203 [2024-07-15 07:34:52.575328] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:14.203 [2024-07-15 07:34:52.575371] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:14.203 [2024-07-15 07:34:52.575400] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:14.461 00:26:14.461 real 0m1.023s 00:26:14.461 user 0m0.761s 00:26:14.461 sys 0m0.155s 00:26:14.461 07:34:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:26:14.461 07:34:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.461 07:34:53 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:14.461 ************************************ 00:26:14.461 END TEST bdev_json_nonenclosed 00:26:14.461 ************************************ 00:26:14.719 07:34:53 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:26:14.719 07:34:53 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:26:14.719 07:34:53 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:14.719 07:34:53 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:14.719 07:34:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.719 07:34:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:26:14.719 ************************************ 00:26:14.719 START TEST bdev_json_nonarray 00:26:14.719 ************************************ 00:26:14.719 07:34:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:14.719 [2024-07-15 07:34:53.220901] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:14.719 [2024-07-15 07:34:53.221113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67508 ] 00:26:14.976 [2024-07-15 07:34:53.402710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.234 [2024-07-15 07:34:53.699163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.234 [2024-07-15 07:34:53.699317] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:15.234 [2024-07-15 07:34:53.699347] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:15.234 [2024-07-15 07:34:53.699368] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:15.799 00:26:15.799 real 0m1.081s 00:26:15.799 user 0m0.788s 00:26:15.799 sys 0m0.186s 00:26:15.799 07:34:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:26:15.799 07:34:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:15.799 ************************************ 00:26:15.799 END TEST bdev_json_nonarray 00:26:15.799 ************************************ 00:26:15.799 07:34:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:15.799 07:34:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:26:15.799 07:34:54 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:26:15.799 00:26:15.799 real 0m46.982s 00:26:15.799 user 1m8.397s 00:26:15.799 sys 0m7.762s 00:26:15.799 07:34:54 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:15.799 07:34:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:26:15.799 ************************************ 00:26:15.799 END TEST blockdev_nvme 00:26:15.799 ************************************ 00:26:15.799 07:34:54 -- common/autotest_common.sh@1142 -- # return 0 00:26:15.799 07:34:54 -- spdk/autotest.sh@213 -- # uname -s 00:26:15.799 07:34:54 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:26:15.799 07:34:54 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:15.799 07:34:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:15.799 07:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.799 07:34:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.799 ************************************ 00:26:15.799 START TEST blockdev_nvme_gpt 00:26:15.799 ************************************ 00:26:15.799 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:26:15.799 * Looking for test storage... 
00:26:15.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67584 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67584 00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67584 ']' 00:26:15.800 07:34:54 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
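waitforlisten above simply blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A rough, hedged equivalent of that start-and-wait step, using the default /var/tmp/spdk.sock and an existing RPC (spdk_get_version) purely as a liveness probe, would be:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # start the target in the background
spdk_tgt_pid=$!
# Poll the RPC socket until the target responds; bail out if the process died first.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
  kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited before listening' >&2; exit 1; }
  sleep 0.5
done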
00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.800 07:34:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 [2024-07-15 07:34:54.526173] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:16.058 [2024-07-15 07:34:54.526368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67584 ] 00:26:16.315 [2024-07-15 07:34:54.703711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.574 [2024-07-15 07:34:54.979516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.519 07:34:55 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.519 07:34:55 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:26:17.519 07:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:26:17.519 07:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:26:17.519 07:34:55 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:17.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:18.037 Waiting for block devices as requested 00:26:18.037 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.037 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.296 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:18.296 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:23.562 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:26:23.562 BYT; 00:26:23.562 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:26:23.562 
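setup_gpt_conf probes each detected NVMe namespace with parted and keeps the first one that reports an unrecognised disk label, as the pt= capture above and the match that follows show. Condensed into a sketch (the real script builds its candidate list from sysfs paths; a plain glob is used here for brevity):

gpt_nvme=
for dev in /dev/nvme*n*; do                      # narrow the glob if partition nodes are present
  # -m: machine-parsable output, -s: never prompt; the 'unrecognised disk label' text is captured too.
  pt=$(parted "$dev" -ms print 2>&1) || true
  if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
    gpt_nvme=$dev                                # first namespace without a partition table wins
    break
  fi
done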
07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:26:23.562 BYT; 00:26:23.562 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:23.562 07:35:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:26:23.562 07:35:01 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:26:24.498 The operation has completed successfully. 00:26:24.498 07:35:02 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:26:25.430 The operation has completed successfully. 00:26:25.430 07:35:03 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:25.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.626 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.626 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.626 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.626 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:26:26.626 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.626 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:26.626 [] 00:26:26.626 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.626 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:26:26.626 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:26:26.626 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:26:26.626 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:26.627 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:26:26.627 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.627 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:26.885 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.885 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:26:26.885 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.885 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.145 07:35:05 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f00ade94-e2dc-43f5-8a1d-6d439a25c325"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f00ade94-e2dc-43f5-8a1d-6d439a25c325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "668cf43e-f353-49d8-bfae-8162ee02ac9b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "668cf43e-f353-49d8-bfae-8162ee02ac9b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4ecbe070-ffd9-4784-b30d-3b0313e71702"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4ecbe070-ffd9-4784-b30d-3b0313e71702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b8964aac-03c9-47f4-8e75-f6a70398a8a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8964aac-03c9-47f4-8e75-f6a70398a8a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6fd7b99b-b3d4-4717-a9f7-48218d835400"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6fd7b99b-b3d4-4717-a9f7-48218d835400",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:26:27.145 07:35:05 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67584 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67584 ']' 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67584 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:27.145 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67584 00:26:27.404 killing process with pid 67584 00:26:27.404 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:27.404 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:27.404 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67584' 00:26:27.404 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67584 00:26:27.404 07:35:05 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67584 00:26:29.939 07:35:08 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:29.939 07:35:08 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:26:29.939 07:35:08 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:29.939 07:35:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.939 07:35:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:29.939 ************************************ 00:26:29.939 START TEST bdev_hello_world 00:26:29.939 ************************************ 00:26:29.939 07:35:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:26:29.939 [2024-07-15 07:35:08.324322] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
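To recap the GPT setup that just completed: the unlabeled namespace was given a fresh GPT label split into two halves, the partitions were retagged with the SPDK partition-type GUIDs read out of module/bdev/gpt/gpt.h, and bdev_get_bdevs then showed them surfacing as the Nvme0n1p1/Nvme0n1p2 GPT bdevs in the JSON dump above. A condensed, hedged replay of those steps (GUIDs copied from this run; assumes the target with the NVMe controllers attached is still up):

dev=/dev/nvme1n1    # the namespace selected earlier
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy ("old") one.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"
# List the unclaimed bdevs the GPT module exposes, mirroring the jq filters used above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'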
00:26:29.939 [2024-07-15 07:35:08.324539] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68227 ] 00:26:29.939 [2024-07-15 07:35:08.504427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.196 [2024-07-15 07:35:08.786854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.127 [2024-07-15 07:35:09.483673] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:31.127 [2024-07-15 07:35:09.483752] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:26:31.127 [2024-07-15 07:35:09.483784] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:31.127 [2024-07-15 07:35:09.487140] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:31.127 [2024-07-15 07:35:09.487536] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:31.127 [2024-07-15 07:35:09.487571] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:31.127 [2024-07-15 07:35:09.487870] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:26:31.127 00:26:31.127 [2024-07-15 07:35:09.487910] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:32.499 00:26:32.499 real 0m2.603s 00:26:32.499 user 0m2.115s 00:26:32.499 sys 0m0.374s 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.499 ************************************ 00:26:32.499 END TEST bdev_hello_world 00:26:32.499 ************************************ 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:32.499 07:35:10 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:26:32.499 07:35:10 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:26:32.499 07:35:10 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:32.499 07:35:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.499 07:35:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:32.499 ************************************ 00:26:32.499 START TEST bdev_bounds 00:26:32.499 ************************************ 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:26:32.499 Process bdevio pid: 68275 00:26:32.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
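bdev_bounds, which starts here, drives the bdevio application: bdevio is launched against the same bdev.json with an empty filter argument so every bdev becomes an I/O target, and tests.py then triggers the test suites over RPC once the app is listening. Reduced to its two essential commands (flags copied from this run; the wait for the RPC socket is elided):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to answer, as with waitforlisten above ...
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"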
00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68275 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68275' 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68275 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68275 ']' 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.499 07:35:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:32.499 [2024-07-15 07:35:10.972247] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:32.499 [2024-07-15 07:35:10.972484] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68275 ] 00:26:32.758 [2024-07-15 07:35:11.152908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.016 [2024-07-15 07:35:11.435469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.016 [2024-07-15 07:35:11.435584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.016 [2024-07-15 07:35:11.435632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.582 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.582 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:26:33.582 07:35:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:33.840 I/O targets: 00:26:33.840 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:26:33.840 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:26:33.840 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:26:33.840 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:33.840 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:33.840 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:33.840 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:26:33.840 00:26:33.840 00:26:33.840 CUnit - A unit testing framework for C - Version 2.1-3 00:26:33.840 http://cunit.sourceforge.net/ 00:26:33.840 00:26:33.840 00:26:33.840 Suite: bdevio tests on: Nvme3n1 00:26:33.840 Test: blockdev write read block ...passed 00:26:33.840 Test: blockdev write zeroes read block ...passed 00:26:33.840 Test: blockdev write zeroes read no split ...passed 00:26:33.840 Test: blockdev write zeroes read split ...passed 00:26:33.840 Test: blockdev write zeroes 
read split partial ...passed 00:26:33.840 Test: blockdev reset ...[2024-07-15 07:35:12.386936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:26:33.840 [2024-07-15 07:35:12.391326] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:33.840 passed 00:26:33.840 Test: blockdev write read 8 blocks ...passed 00:26:33.840 Test: blockdev write read size > 128k ...passed 00:26:33.840 Test: blockdev write read invalid size ...passed 00:26:33.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:33.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:33.840 Test: blockdev write read max offset ...passed 00:26:33.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:33.840 Test: blockdev writev readv 8 blocks ...passed 00:26:33.840 Test: blockdev writev readv 30 x 1block ...passed 00:26:33.840 Test: blockdev writev readv block ...passed 00:26:33.840 Test: blockdev writev readv size > 128k ...passed 00:26:33.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:33.840 Test: blockdev comparev and writev ...[2024-07-15 07:35:12.400533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274804000 len:0x1000 00:26:33.840 passed 00:26:33.840 Test: blockdev nvme passthru rw ...[2024-07-15 07:35:12.400818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:33.840 passed 00:26:33.840 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:35:12.401811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:33.840 [2024-07-15 07:35:12.401939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:33.840 passed 00:26:33.840 Test: blockdev nvme admin passthru ...passed 00:26:33.840 Test: blockdev copy ...passed 00:26:33.840 Suite: bdevio tests on: Nvme2n3 00:26:33.840 Test: blockdev write read block ...passed 00:26:33.840 Test: blockdev write zeroes read block ...passed 00:26:33.840 Test: blockdev write zeroes read no split ...passed 00:26:33.840 Test: blockdev write zeroes read split ...passed 00:26:34.098 Test: blockdev write zeroes read split partial ...passed 00:26:34.098 Test: blockdev reset ...[2024-07-15 07:35:12.482969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:26:34.098 [2024-07-15 07:35:12.487890] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.098 passed 00:26:34.098 Test: blockdev write read 8 blocks ...passed 00:26:34.098 Test: blockdev write read size > 128k ...passed 00:26:34.098 Test: blockdev write read invalid size ...passed 00:26:34.098 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.098 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.098 Test: blockdev write read max offset ...passed 00:26:34.098 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.098 Test: blockdev writev readv 8 blocks ...passed 00:26:34.098 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.098 Test: blockdev writev readv block ...passed 00:26:34.098 Test: blockdev writev readv size > 128k ...passed 00:26:34.098 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.098 Test: blockdev comparev and writev ...[2024-07-15 07:35:12.496922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283e3a000 len:0x1000 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru rw ...[2024-07-15 07:35:12.497247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:35:12.498235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme admin passthru ...[2024-07-15 07:35:12.498482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev copy ...passed 00:26:34.098 Suite: bdevio tests on: Nvme2n2 00:26:34.098 Test: blockdev write read block ...passed 00:26:34.098 Test: blockdev write zeroes read block ...passed 00:26:34.098 Test: blockdev write zeroes read no split ...passed 00:26:34.098 Test: blockdev write zeroes read split ...passed 00:26:34.098 Test: blockdev write zeroes read split partial ...passed 00:26:34.098 Test: blockdev reset ...[2024-07-15 07:35:12.595048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:26:34.098 [2024-07-15 07:35:12.599745] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.098 passed 00:26:34.098 Test: blockdev write read 8 blocks ...passed 00:26:34.098 Test: blockdev write read size > 128k ...passed 00:26:34.098 Test: blockdev write read invalid size ...passed 00:26:34.098 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.098 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.098 Test: blockdev write read max offset ...passed 00:26:34.098 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.098 Test: blockdev writev readv 8 blocks ...passed 00:26:34.098 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.098 Test: blockdev writev readv block ...passed 00:26:34.098 Test: blockdev writev readv size > 128k ...passed 00:26:34.098 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.098 Test: blockdev comparev and writev ...[2024-07-15 07:35:12.608909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283e36000 len:0x1000 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru rw ...[2024-07-15 07:35:12.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:35:12.610133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme admin passthru ...[2024-07-15 07:35:12.610357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev copy ...passed 00:26:34.098 Suite: bdevio tests on: Nvme2n1 00:26:34.098 Test: blockdev write read block ...passed 00:26:34.098 Test: blockdev write zeroes read block ...passed 00:26:34.098 Test: blockdev write zeroes read no split ...passed 00:26:34.098 Test: blockdev write zeroes read split ...passed 00:26:34.098 Test: blockdev write zeroes read split partial ...passed 00:26:34.098 Test: blockdev reset ...[2024-07-15 07:35:12.688549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:26:34.098 [2024-07-15 07:35:12.692910] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.098 passed 00:26:34.098 Test: blockdev write read 8 blocks ...passed 00:26:34.098 Test: blockdev write read size > 128k ...passed 00:26:34.098 Test: blockdev write read invalid size ...passed 00:26:34.098 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.098 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.098 Test: blockdev write read max offset ...passed 00:26:34.098 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.098 Test: blockdev writev readv 8 blocks ...passed 00:26:34.098 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.098 Test: blockdev writev readv block ...passed 00:26:34.098 Test: blockdev writev readv size > 128k ...passed 00:26:34.098 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.098 Test: blockdev comparev and writev ...[2024-07-15 07:35:12.701930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283e30000 len:0x1000 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru rw ...[2024-07-15 07:35:12.702239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:35:12.703136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:34.098 [2024-07-15 07:35:12.703259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:34.098 passed 00:26:34.098 Test: blockdev nvme admin passthru ...passed 00:26:34.098 Test: blockdev copy ...passed 00:26:34.098 Suite: bdevio tests on: Nvme1n1 00:26:34.098 Test: blockdev write read block ...passed 00:26:34.357 Test: blockdev write zeroes read block ...passed 00:26:34.357 Test: blockdev write zeroes read no split ...passed 00:26:34.357 Test: blockdev write zeroes read split ...passed 00:26:34.357 Test: blockdev write zeroes read split partial ...passed 00:26:34.357 Test: blockdev reset ...[2024-07-15 07:35:12.774731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:26:34.357 [2024-07-15 07:35:12.778810] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.357 passed 00:26:34.357 Test: blockdev write read 8 blocks ...passed 00:26:34.357 Test: blockdev write read size > 128k ...passed 00:26:34.357 Test: blockdev write read invalid size ...passed 00:26:34.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.357 Test: blockdev write read max offset ...passed 00:26:34.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.357 Test: blockdev writev readv 8 blocks ...passed 00:26:34.357 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.357 Test: blockdev writev readv block ...passed 00:26:34.357 Test: blockdev writev readv size > 128k ...passed 00:26:34.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.357 Test: blockdev comparev and writev ...[2024-07-15 07:35:12.787964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27b40e000 len:0x1000 00:26:34.357 passed 00:26:34.357 Test: blockdev nvme passthru rw ...[2024-07-15 07:35:12.788280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:26:34.357 passed 00:26:34.357 Test: blockdev nvme passthru vendor specific ...[2024-07-15 07:35:12.789137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:26:34.357 passed 00:26:34.357 Test: blockdev nvme admin passthru ...[2024-07-15 07:35:12.789387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:26:34.357 passed 00:26:34.357 Test: blockdev copy ...passed 00:26:34.357 Suite: bdevio tests on: Nvme0n1p2 00:26:34.357 Test: blockdev write read block ...passed 00:26:34.357 Test: blockdev write zeroes read block ...passed 00:26:34.357 Test: blockdev write zeroes read no split ...passed 00:26:34.357 Test: blockdev write zeroes read split ...passed 00:26:34.357 Test: blockdev write zeroes read split partial ...passed 00:26:34.357 Test: blockdev reset ...[2024-07-15 07:35:12.870830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:26:34.357 [2024-07-15 07:35:12.874791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.357 passed 00:26:34.357 Test: blockdev write read 8 blocks ...passed 00:26:34.357 Test: blockdev write read size > 128k ...passed 00:26:34.357 Test: blockdev write read invalid size ...passed 00:26:34.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.357 Test: blockdev write read max offset ...passed 00:26:34.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.357 Test: blockdev writev readv 8 blocks ...passed 00:26:34.357 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.357 Test: blockdev writev readv block ...passed 00:26:34.357 Test: blockdev writev readv size > 128k ...passed 00:26:34.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.357 Test: blockdev comparev and writev ...passed 00:26:34.357 Test: blockdev nvme passthru rw ...passed 00:26:34.357 Test: blockdev nvme passthru vendor specific ...passed 00:26:34.357 Test: blockdev nvme admin passthru ...passed 00:26:34.357 Test: blockdev copy ...[2024-07-15 07:35:12.883812] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:26:34.357 separate metadata which is not supported yet. 00:26:34.357 passed 00:26:34.357 Suite: bdevio tests on: Nvme0n1p1 00:26:34.357 Test: blockdev write read block ...passed 00:26:34.357 Test: blockdev write zeroes read block ...passed 00:26:34.357 Test: blockdev write zeroes read no split ...passed 00:26:34.357 Test: blockdev write zeroes read split ...passed 00:26:34.357 Test: blockdev write zeroes read split partial ...passed 00:26:34.357 Test: blockdev reset ...[2024-07-15 07:35:12.952384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:26:34.357 [2024-07-15 07:35:12.956280] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:34.357 passed 00:26:34.357 Test: blockdev write read 8 blocks ...passed 00:26:34.357 Test: blockdev write read size > 128k ...passed 00:26:34.357 Test: blockdev write read invalid size ...passed 00:26:34.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.357 Test: blockdev write read max offset ...passed 00:26:34.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.357 Test: blockdev writev readv 8 blocks ...passed 00:26:34.357 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.357 Test: blockdev writev readv block ...passed 00:26:34.357 Test: blockdev writev readv size > 128k ...passed 00:26:34.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.357 Test: blockdev comparev and writev ...passed 00:26:34.357 Test: blockdev nvme passthru rw ...passed 00:26:34.357 Test: blockdev nvme passthru vendor specific ...passed 00:26:34.357 Test: blockdev nvme admin passthru ...passed 00:26:34.357 Test: blockdev copy ...[2024-07-15 07:35:12.963666] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:26:34.357 separate metadata which is not supported yet. 
00:26:34.357 passed 00:26:34.357 00:26:34.357 Run Summary: Type Total Ran Passed Failed Inactive 00:26:34.357 suites 7 7 n/a 0 0 00:26:34.357 tests 161 161 161 0 0 00:26:34.357 asserts 1006 1006 1006 0 n/a 00:26:34.357 00:26:34.357 Elapsed time = 1.801 seconds 00:26:34.615 0 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68275 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68275 ']' 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68275 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.615 07:35:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68275 00:26:34.615 07:35:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.615 07:35:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.615 07:35:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68275' 00:26:34.615 killing process with pid 68275 00:26:34.615 07:35:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68275 00:26:34.615 07:35:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68275 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:26:36.059 00:26:36.059 real 0m3.475s 00:26:36.059 user 0m8.554s 00:26:36.059 sys 0m0.550s 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 ************************************ 00:26:36.059 END TEST bdev_bounds 00:26:36.059 ************************************ 00:26:36.059 07:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:26:36.059 07:35:14 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:26:36.059 07:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:36.059 07:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.059 07:35:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 ************************************ 00:26:36.059 START TEST bdev_nbd 00:26:36.059 ************************************ 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=68345 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 68345 /var/tmp/spdk-nbd.sock 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68345 ']' 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.059 07:35:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 [2024-07-15 07:35:14.507775] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
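[editor's note] Before any NBD device can be attached, the test brings up a bare SPDK application (bdev_svc) that loads the bdev configuration from bdev.json and serves RPCs on /var/tmp/spdk-nbd.sock; the "Waiting for process to start up and listen on UNIX domain socket" message above is the harness polling that socket until the app answers. A minimal sketch of that phase, with paths as shown in the log but relative to the repo root (the retry budget, poll interval, and the use of rpc_get_methods as the readiness probe are assumptions, not the harness's exact code):

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # poll the RPC socket until the app is ready to serve nbd_start_disk calls
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done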
00:26:36.059 [2024-07-15 07:35:14.507964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.318 [2024-07-15 07:35:14.686352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.576 [2024-07-15 07:35:14.975739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:37.142 07:35:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.709 1+0 records in 00:26:37.709 1+0 records out 00:26:37.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612532 s, 6.7 MB/s 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:37.709 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:37.967 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.968 1+0 records in 00:26:37.968 1+0 records out 00:26:37.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591971 s, 6.9 MB/s 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:37.968 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.226 1+0 records in 00:26:38.226 1+0 records out 00:26:38.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525228 s, 7.8 MB/s 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:38.226 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.484 1+0 records in 00:26:38.484 1+0 records out 00:26:38.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584767 s, 7.0 MB/s 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:38.484 07:35:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:38.742 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.742 1+0 records in 00:26:38.742 1+0 records out 00:26:38.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765942 s, 5.3 MB/s 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:38.743 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.001 1+0 records in 00:26:39.001 1+0 records out 00:26:39.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692403 s, 5.9 MB/s 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:39.001 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.259 1+0 records in 00:26:39.259 1+0 records out 00:26:39.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766022 s, 5.3 MB/s 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:26:39.259 07:35:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:39.516 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd0", 00:26:39.516 "bdev_name": "Nvme0n1p1" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd1", 00:26:39.516 "bdev_name": "Nvme0n1p2" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd2", 00:26:39.516 "bdev_name": "Nvme1n1" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd3", 00:26:39.516 "bdev_name": "Nvme2n1" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd4", 00:26:39.516 "bdev_name": "Nvme2n2" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd5", 00:26:39.516 "bdev_name": "Nvme2n3" 00:26:39.516 }, 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd6", 00:26:39.516 "bdev_name": "Nvme3n1" 00:26:39.516 } 00:26:39.516 ]' 00:26:39.516 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:39.516 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:39.516 { 00:26:39.516 "nbd_device": "/dev/nbd0", 00:26:39.517 "bdev_name": "Nvme0n1p1" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd1", 00:26:39.517 "bdev_name": "Nvme0n1p2" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd2", 00:26:39.517 "bdev_name": "Nvme1n1" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd3", 00:26:39.517 "bdev_name": "Nvme2n1" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd4", 00:26:39.517 "bdev_name": "Nvme2n2" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd5", 00:26:39.517 "bdev_name": "Nvme2n3" 00:26:39.517 }, 00:26:39.517 { 00:26:39.517 "nbd_device": "/dev/nbd6", 00:26:39.517 "bdev_name": "Nvme3n1" 00:26:39.517 } 00:26:39.517 ]' 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:39.517 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:39.774 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:39.774 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:39.774 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:39.775 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.033 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:26:40.291 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:26:40.291 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:26:40.291 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.292 07:35:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.292 07:35:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.857 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.115 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:41.373 07:35:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:41.630 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:41.630 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:41.630 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:41.888 
07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:41.888 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:26:42.147 /dev/nbd0 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.147 1+0 records in 00:26:42.147 1+0 records out 00:26:42.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114138 s, 3.6 MB/s 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:42.147 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:26:42.405 /dev/nbd1 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:42.405 07:35:20 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.405 1+0 records in 00:26:42.405 1+0 records out 00:26:42.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518201 s, 7.9 MB/s 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:42.405 07:35:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:26:42.663 /dev/nbd10 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.663 1+0 records in 00:26:42.663 1+0 records out 00:26:42.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000897001 s, 4.6 MB/s 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:42.663 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:26:42.921 /dev/nbd11 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.921 1+0 records in 00:26:42.921 1+0 records out 00:26:42.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584287 s, 7.0 MB/s 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:42.921 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:26:43.180 /dev/nbd12 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
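[editor's note] The dense runs of traced (( i <= 20 )), grep, dd, and stat calls throughout this stretch all come from the waitfornbd helper: after each nbd_start_disk RPC it polls /proc/partitions for the new device node and then issues a single 4 KiB O_DIRECT read to prove the device actually services I/O. Paraphrased as a standalone function from the traced commands (the per-attempt sleep is an assumption; the temp-file path is the one the log uses):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # wait (up to 20 attempts) for the kernel to publish the partition entry
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # delay between polls is an assumption
        done
        # prove the device answers reads: one 4 KiB direct-I/O block, checked by size
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        [ "$(stat -c %s "$tmp")" != 0 ]
        local rc=$?
        rm -f "$tmp"
        return "$rc"
    }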
00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:43.180 1+0 records in 00:26:43.180 1+0 records out 00:26:43.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063829 s, 6.4 MB/s 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:43.180 07:35:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:26:43.438 /dev/nbd13 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:43.696 1+0 records in 00:26:43.696 1+0 records out 00:26:43.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527223 s, 7.8 MB/s 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:43.696 07:35:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:43.697 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:43.697 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:26:43.955 /dev/nbd14 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:43.955 1+0 records in 00:26:43.955 1+0 records out 00:26:43.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837336 s, 4.9 MB/s 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:26:43.955 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:43.956 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:43.956 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:44.214 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd0", 00:26:44.214 "bdev_name": "Nvme0n1p1" 00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd1", 00:26:44.214 "bdev_name": "Nvme0n1p2" 00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd10", 00:26:44.214 "bdev_name": "Nvme1n1" 00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd11", 00:26:44.214 "bdev_name": "Nvme2n1" 00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd12", 00:26:44.214 "bdev_name": "Nvme2n2" 00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd13", 00:26:44.214 "bdev_name": "Nvme2n3" 
00:26:44.214 }, 00:26:44.214 { 00:26:44.214 "nbd_device": "/dev/nbd14", 00:26:44.214 "bdev_name": "Nvme3n1" 00:26:44.214 } 00:26:44.214 ]' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd0", 00:26:44.215 "bdev_name": "Nvme0n1p1" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd1", 00:26:44.215 "bdev_name": "Nvme0n1p2" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd10", 00:26:44.215 "bdev_name": "Nvme1n1" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd11", 00:26:44.215 "bdev_name": "Nvme2n1" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd12", 00:26:44.215 "bdev_name": "Nvme2n2" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd13", 00:26:44.215 "bdev_name": "Nvme2n3" 00:26:44.215 }, 00:26:44.215 { 00:26:44.215 "nbd_device": "/dev/nbd14", 00:26:44.215 "bdev_name": "Nvme3n1" 00:26:44.215 } 00:26:44.215 ]' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:44.215 /dev/nbd1 00:26:44.215 /dev/nbd10 00:26:44.215 /dev/nbd11 00:26:44.215 /dev/nbd12 00:26:44.215 /dev/nbd13 00:26:44.215 /dev/nbd14' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:44.215 /dev/nbd1 00:26:44.215 /dev/nbd10 00:26:44.215 /dev/nbd11 00:26:44.215 /dev/nbd12 00:26:44.215 /dev/nbd13 00:26:44.215 /dev/nbd14' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:44.215 256+0 records in 00:26:44.215 256+0 records out 00:26:44.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00742303 s, 141 MB/s 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.215 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:44.473 256+0 records in 00:26:44.473 256+0 records out 00:26:44.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.151926 s, 6.9 MB/s 00:26:44.473 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.473 07:35:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:44.473 256+0 records in 00:26:44.473 256+0 records out 00:26:44.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128438 s, 8.2 MB/s 00:26:44.473 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.473 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:26:44.731 256+0 records in 00:26:44.731 256+0 records out 00:26:44.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15592 s, 6.7 MB/s 00:26:44.731 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.731 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:26:44.990 256+0 records in 00:26:44.990 256+0 records out 00:26:44.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154228 s, 6.8 MB/s 00:26:44.990 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.990 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:26:44.990 256+0 records in 00:26:44.990 256+0 records out 00:26:44.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150466 s, 7.0 MB/s 00:26:44.990 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:44.990 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:26:45.247 256+0 records in 00:26:45.247 256+0 records out 00:26:45.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151474 s, 6.9 MB/s 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:26:45.247 256+0 records in 00:26:45.247 256+0 records out 00:26:45.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148227 s, 7.1 MB/s 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:45.247 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:45.248 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:45.248 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:45.248 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:26:45.248 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:45.506 07:35:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:45.763 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.021 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.279 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:26:46.543 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:26:46.543 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:26:46.543 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:26:46.543 07:35:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.543 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.808 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:47.066 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:47.322 07:35:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:47.577 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:47.834 malloc_lvol_verify 00:26:47.834 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:48.091 a9245b16-c3b5-4f72-a720-aace3ae76c8c 00:26:48.091 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:48.349 33d030ab-bdea-44a8-a8be-5b23415ae03c 00:26:48.349 07:35:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:48.606 /dev/nbd0 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:48.607 mke2fs 1.46.5 (30-Dec-2021) 00:26:48.607 Discarding device blocks: 0/4096 done 00:26:48.607 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:48.607 00:26:48.607 Allocating group tables: 0/1 done 00:26:48.607 Writing inode tables: 0/1 done 00:26:48.607 Creating journal (1024 blocks): done 00:26:48.607 Writing superblocks and filesystem accounting information: 0/1 done 00:26:48.607 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:26:48.607 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 68345 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68345 ']' 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68345 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68345 00:26:48.863 killing process with pid 68345 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68345' 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68345 00:26:48.863 07:35:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68345 00:26:50.273 07:35:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:50.273 00:26:50.273 real 0m14.458s 00:26:50.273 user 0m20.018s 00:26:50.273 sys 0m4.821s 00:26:50.273 07:35:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.273 07:35:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:50.273 ************************************ 00:26:50.273 END TEST bdev_nbd 00:26:50.273 ************************************ 00:26:50.531 07:35:28 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:26:50.531 skipping fio tests on NVMe due to multi-ns failures. 00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:50.531 07:35:28 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:50.531 07:35:28 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:50.531 07:35:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.531 07:35:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:50.531 ************************************ 00:26:50.531 START TEST bdev_verify 00:26:50.531 ************************************ 00:26:50.531 07:35:28 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:50.531 [2024-07-15 07:35:29.020348] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:50.531 [2024-07-15 07:35:29.020589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68789 ] 00:26:50.788 [2024-07-15 07:35:29.192408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:51.046 [2024-07-15 07:35:29.474827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.046 [2024-07-15 07:35:29.474843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.977 Running I/O for 5 seconds... 
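The bdev_verify stage that has just started drives every bdev described in the generated bdev.json through bdevperf in verify mode, which writes a pattern and reads it back for comparison. The invocation is reproduced below with the commonly documented knobs annotated; -C is carried over from the trace without further interpretation, and the paths assume the checkout layout used by this run. The bdev_verify_big_io stage further down repeats the same run with -o 65536 to cover large transfers.

# Sketch of the verify run traced above (paths as in this run).
# -q 128     queue depth per job
# -o 4096    I/O size in bytes
# -w verify  write a pattern, read it back, compare
# -t 5       run time in seconds
# -m 0x3     core mask, two reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3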
00:26:57.252 00:26:57.252 Latency(us) 00:26:57.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.252 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x5e800 00:26:57.252 Nvme0n1p1 : 5.06 1189.01 4.64 0.00 0.00 107174.24 22401.40 93418.59 00:26:57.252 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x5e800 length 0x5e800 00:26:57.252 Nvme0n1p1 : 5.06 1138.91 4.45 0.00 0.00 111906.64 22997.18 101044.60 00:26:57.252 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x5e7ff 00:26:57.252 Nvme0n1p2 : 5.06 1187.84 4.64 0.00 0.00 107027.38 26333.56 90558.84 00:26:57.252 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:26:57.252 Nvme0n1p2 : 5.06 1138.35 4.45 0.00 0.00 111723.40 26095.24 98661.47 00:26:57.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0xa0000 00:26:57.252 Nvme1n1 : 5.09 1194.07 4.66 0.00 0.00 106362.68 11975.21 86745.83 00:26:57.252 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0xa0000 length 0xa0000 00:26:57.252 Nvme1n1 : 5.09 1143.62 4.47 0.00 0.00 110896.66 10843.23 94848.47 00:26:57.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x80000 00:26:57.252 Nvme2n1 : 5.09 1193.55 4.66 0.00 0.00 106201.70 11558.17 85315.96 00:26:57.252 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x80000 length 0x80000 00:26:57.252 Nvme2n1 : 5.09 1143.12 4.47 0.00 0.00 110691.66 10307.03 93895.21 00:26:57.252 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x80000 00:26:57.252 Nvme2n2 : 5.10 1192.89 4.66 0.00 0.00 106010.29 11439.01 87699.08 00:26:57.252 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x80000 length 0x80000 00:26:57.252 Nvme2n2 : 5.11 1151.69 4.50 0.00 0.00 109861.01 12988.04 91988.71 00:26:57.252 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x80000 00:26:57.252 Nvme2n3 : 5.11 1201.75 4.69 0.00 0.00 105210.35 10545.34 91035.46 00:26:57.252 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x80000 length 0x80000 00:26:57.252 Nvme2n3 : 5.12 1150.53 4.49 0.00 0.00 109694.97 15132.86 95325.09 00:26:57.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x0 length 0x20000 00:26:57.252 Nvme3n1 : 5.12 1200.50 4.69 0.00 0.00 105071.25 13166.78 93895.21 00:26:57.252 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.252 Verification LBA range: start 0x20000 length 0x20000 00:26:57.252 Nvme3n1 : 5.12 1149.59 4.49 0.00 0.00 109557.67 13881.72 100091.35 00:26:57.252 =================================================================================================================== 00:26:57.252 Total : 16375.40 63.97 0.00 0.00 108331.75 10307.03 
101044.60 00:26:58.626 00:26:58.626 real 0m8.311s 00:26:58.626 user 0m14.911s 00:26:58.626 sys 0m0.426s 00:26:58.626 07:35:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.626 07:35:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:58.626 ************************************ 00:26:58.626 END TEST bdev_verify 00:26:58.626 ************************************ 00:26:58.884 07:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:26:58.884 07:35:37 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:58.884 07:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:58.884 07:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.884 07:35:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:26:58.884 ************************************ 00:26:58.884 START TEST bdev_verify_big_io 00:26:58.884 ************************************ 00:26:58.884 07:35:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:58.884 [2024-07-15 07:35:37.419357] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:26:58.884 [2024-07-15 07:35:37.419580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:26:59.142 [2024-07-15 07:35:37.601516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:59.399 [2024-07-15 07:35:37.907272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.399 [2024-07-15 07:35:37.907301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.345 Running I/O for 5 seconds... 
00:27:06.975 00:27:06.975 Latency(us) 00:27:06.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.975 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x5e80 00:27:06.975 Nvme0n1p1 : 5.66 142.14 8.88 0.00 0.00 874343.64 40513.16 1143901.09 00:27:06.975 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x5e80 length 0x5e80 00:27:06.975 Nvme0n1p1 : 5.63 125.04 7.81 0.00 0.00 987531.47 17277.67 1090519.04 00:27:06.975 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x5e7f 00:27:06.975 Nvme0n1p2 : 5.67 150.03 9.38 0.00 0.00 811755.39 109147.23 896055.85 00:27:06.975 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x5e7f length 0x5e7f 00:27:06.975 Nvme0n1p2 : 5.72 125.60 7.85 0.00 0.00 952628.57 83886.08 991380.95 00:27:06.975 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0xa000 00:27:06.975 Nvme1n1 : 5.67 152.81 9.55 0.00 0.00 775762.14 83409.45 800730.76 00:27:06.975 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0xa000 length 0xa000 00:27:06.975 Nvme1n1 : 5.76 119.55 7.47 0.00 0.00 969904.44 82456.20 1616713.54 00:27:06.975 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x8000 00:27:06.975 Nvme2n1 : 5.71 157.15 9.82 0.00 0.00 740874.28 56718.43 827421.79 00:27:06.975 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x8000 length 0x8000 00:27:06.975 Nvme2n1 : 5.79 130.15 8.13 0.00 0.00 869876.22 43849.54 1182031.13 00:27:06.975 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x8000 00:27:06.975 Nvme2n2 : 5.71 161.62 10.10 0.00 0.00 710995.69 37891.72 842673.80 00:27:06.975 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x8000 length 0x8000 00:27:06.975 Nvme2n2 : 5.85 139.75 8.73 0.00 0.00 788837.85 28359.21 1197283.14 00:27:06.975 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x8000 00:27:06.975 Nvme2n3 : 5.74 167.21 10.45 0.00 0.00 674724.96 20494.89 857925.82 00:27:06.975 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x8000 length 0x8000 00:27:06.975 Nvme2n3 : 5.92 153.98 9.62 0.00 0.00 708170.21 17873.45 1746355.67 00:27:06.975 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x0 length 0x2000 00:27:06.975 Nvme3n1 : 5.76 178.24 11.14 0.00 0.00 620761.15 5630.14 869364.83 00:27:06.975 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:06.975 Verification LBA range: start 0x2000 length 0x2000 00:27:06.975 Nvme3n1 : 5.94 170.09 10.63 0.00 0.00 625942.05 841.54 1769233.69 00:27:06.975 =================================================================================================================== 00:27:06.976 Total : 2073.36 129.58 0.00 0.00 779245.46 841.54 
1769233.69 00:27:08.352 00:27:08.352 real 0m9.639s 00:27:08.352 user 0m17.417s 00:27:08.352 sys 0m0.496s 00:27:08.352 07:35:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:08.352 07:35:46 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.352 ************************************ 00:27:08.352 END TEST bdev_verify_big_io 00:27:08.352 ************************************ 00:27:08.352 07:35:46 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:27:08.352 07:35:46 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:08.352 07:35:46 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:08.352 07:35:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.352 07:35:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:27:08.352 ************************************ 00:27:08.352 START TEST bdev_write_zeroes 00:27:08.352 ************************************ 00:27:08.352 07:35:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:08.610 [2024-07-15 07:35:47.061868] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:27:08.610 [2024-07-15 07:35:47.062086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69021 ] 00:27:08.868 [2024-07-15 07:35:47.232528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.127 [2024-07-15 07:35:47.500805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.693 Running I/O for 1 seconds... 
00:27:11.103 00:27:11.103 Latency(us) 00:27:11.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.103 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme0n1p1 : 1.02 6840.46 26.72 0.00 0.00 18625.69 13702.98 29789.09 00:27:11.103 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme0n1p2 : 1.02 6828.73 26.67 0.00 0.00 18617.72 14000.87 29908.25 00:27:11.103 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme1n1 : 1.02 6818.27 26.63 0.00 0.00 18584.48 14537.08 27286.81 00:27:11.103 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme2n1 : 1.03 6857.32 26.79 0.00 0.00 18423.63 11141.12 25499.46 00:27:11.103 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme2n2 : 1.03 6847.05 26.75 0.00 0.00 18398.00 10545.34 25380.31 00:27:11.103 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme2n3 : 1.03 6836.75 26.71 0.00 0.00 18385.17 10307.03 25380.31 00:27:11.103 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:11.103 Nvme3n1 : 1.03 6826.57 26.67 0.00 0.00 18364.81 8817.57 25380.31 00:27:11.103 =================================================================================================================== 00:27:11.103 Total : 47855.16 186.93 0.00 0.00 18485.16 8817.57 29908.25 00:27:12.037 00:27:12.037 real 0m3.661s 00:27:12.037 user 0m3.200s 00:27:12.037 sys 0m0.339s 00:27:12.038 07:35:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.038 07:35:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:12.038 ************************************ 00:27:12.038 END TEST bdev_write_zeroes 00:27:12.038 ************************************ 00:27:12.297 07:35:50 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:27:12.297 07:35:50 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:12.297 07:35:50 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:12.297 07:35:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.297 07:35:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:27:12.297 ************************************ 00:27:12.297 START TEST bdev_json_nonenclosed 00:27:12.297 ************************************ 00:27:12.297 07:35:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:12.297 [2024-07-15 07:35:50.787941] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:27:12.297 [2024-07-15 07:35:50.788165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69080 ] 00:27:12.556 [2024-07-15 07:35:50.970953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.814 [2024-07-15 07:35:51.264738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.814 [2024-07-15 07:35:51.264866] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:12.814 [2024-07-15 07:35:51.264895] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:12.814 [2024-07-15 07:35:51.264915] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:13.382 00:27:13.382 real 0m1.064s 00:27:13.382 user 0m0.770s 00:27:13.382 sys 0m0.187s 00:27:13.382 07:35:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:27:13.382 07:35:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.382 07:35:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:13.382 ************************************ 00:27:13.382 END TEST bdev_json_nonenclosed 00:27:13.382 ************************************ 00:27:13.382 07:35:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:27:13.382 07:35:51 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:27:13.382 07:35:51 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:13.382 07:35:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:13.382 07:35:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.382 07:35:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:27:13.382 ************************************ 00:27:13.382 START TEST bdev_json_nonarray 00:27:13.382 ************************************ 00:27:13.382 07:35:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:13.382 [2024-07-15 07:35:51.884009] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:27:13.382 [2024-07-15 07:35:51.884185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69111 ] 00:27:13.641 [2024-07-15 07:35:52.055446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.899 [2024-07-15 07:35:52.323996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.899 [2024-07-15 07:35:52.324128] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
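The two json_config failures above are the expected outcomes of this pair of negative tests: bdev_json_nonenclosed feeds bdevperf a configuration that is not enclosed in a top-level object, and bdev_json_nonarray one whose 'subsystems' member is not an array; json_config_prepare_ctx rejects both and the wrappers record exit status 234. For reference, a well-formed configuration has the shape sketched below; the malloc bdev entry is illustrative and is not the content of the actual fixture files.

# Sketch of the shape json_config_prepare_ctx accepts (illustrative payload only).
cat > /tmp/bdev_ok.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
      ]
    }
  ]
}
EOF
# nonenclosed.json presumably drops the outer braces around "subsystems";
# nonarray.json presumably makes "subsystems" something other than an array.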
00:27:13.899 [2024-07-15 07:35:52.324157] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:13.899 [2024-07-15 07:35:52.324176] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:14.465 00:27:14.465 real 0m1.012s 00:27:14.465 user 0m0.733s 00:27:14.465 sys 0m0.172s 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.465 ************************************ 00:27:14.465 END TEST bdev_json_nonarray 00:27:14.465 ************************************ 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:14.465 07:35:52 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:27:14.465 07:35:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:27:14.465 07:35:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:27:14.465 07:35:52 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:27:14.465 07:35:52 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:27:14.465 07:35:52 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:14.465 07:35:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.465 07:35:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:27:14.465 ************************************ 00:27:14.465 START TEST bdev_gpt_uuid 00:27:14.465 ************************************ 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69142 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69142 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69142 ']' 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:14.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:14.465 07:35:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:14.465 [2024-07-15 07:35:52.969702] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
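The bdev_gpt_uuid test that has just started looks up the two GPT partition bdevs on Nvme0n1 by their partition unique GUIDs and checks that the alias and the driver_specific.gpt.unique_partition_guid reported by the target round-trip the same values. Against a running spdk_tgt the same probes can be reproduced roughly as sketched below; the GUIDs are the ones visible in the trace and the default RPC socket is assumed.

# Sketch of the GPT UUID round-trip checks performed in the trace that follows.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
first=6f89f330-603b-4116-ac73-2ca8eae53030      # SPDK_TEST_first, Nvme0n1p1
second=abf1734f-66e5-4c0f-aa29-4021d4d307df     # SPDK_TEST_second, Nvme0n1p2

bdev=$($rpc bdev_get_bdevs -b "$first")
[[ $(jq -r length <<< "$bdev") == 1 ]]                        # exactly one match
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$first" ]]      # alias is the GUID
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$first" ]]

bdev=$($rpc bdev_get_bdevs -b "$second")
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$second" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$second" ]]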
00:27:14.465 [2024-07-15 07:35:52.969892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69142 ] 00:27:14.735 [2024-07-15 07:35:53.140147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.031 [2024-07-15 07:35:53.420935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.963 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.963 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:27:15.963 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:15.963 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.963 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:16.221 Some configs were skipped because the RPC state that can call them passed over. 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.221 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:27:16.221 { 00:27:16.221 "name": "Nvme0n1p1", 00:27:16.221 "aliases": [ 00:27:16.221 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:27:16.221 ], 00:27:16.221 "product_name": "GPT Disk", 00:27:16.221 "block_size": 4096, 00:27:16.221 "num_blocks": 774144, 00:27:16.221 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:16.221 "md_size": 64, 00:27:16.221 "md_interleave": false, 00:27:16.221 "dif_type": 0, 00:27:16.221 "assigned_rate_limits": { 00:27:16.221 "rw_ios_per_sec": 0, 00:27:16.221 "rw_mbytes_per_sec": 0, 00:27:16.221 "r_mbytes_per_sec": 0, 00:27:16.221 "w_mbytes_per_sec": 0 00:27:16.221 }, 00:27:16.221 "claimed": false, 00:27:16.221 "zoned": false, 00:27:16.221 "supported_io_types": { 00:27:16.221 "read": true, 00:27:16.221 "write": true, 00:27:16.221 "unmap": true, 00:27:16.221 "flush": true, 00:27:16.221 "reset": true, 00:27:16.221 "nvme_admin": false, 00:27:16.221 "nvme_io": false, 00:27:16.221 "nvme_io_md": false, 00:27:16.221 "write_zeroes": true, 00:27:16.221 "zcopy": false, 00:27:16.222 "get_zone_info": false, 00:27:16.222 "zone_management": false, 00:27:16.222 "zone_append": false, 00:27:16.222 "compare": true, 00:27:16.222 "compare_and_write": false, 00:27:16.222 "abort": true, 00:27:16.222 "seek_hole": false, 00:27:16.222 "seek_data": false, 00:27:16.222 "copy": 
true, 00:27:16.222 "nvme_iov_md": false 00:27:16.222 }, 00:27:16.222 "driver_specific": { 00:27:16.222 "gpt": { 00:27:16.222 "base_bdev": "Nvme0n1", 00:27:16.222 "offset_blocks": 256, 00:27:16.222 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:27:16.222 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:27:16.222 "partition_name": "SPDK_TEST_first" 00:27:16.222 } 00:27:16.222 } 00:27:16.222 } 00:27:16.222 ]' 00:27:16.222 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:27:16.222 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:27:16.222 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:27:16.222 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:16.222 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:27:16.480 { 00:27:16.480 "name": "Nvme0n1p2", 00:27:16.480 "aliases": [ 00:27:16.480 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:27:16.480 ], 00:27:16.480 "product_name": "GPT Disk", 00:27:16.480 "block_size": 4096, 00:27:16.480 "num_blocks": 774143, 00:27:16.480 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:16.480 "md_size": 64, 00:27:16.480 "md_interleave": false, 00:27:16.480 "dif_type": 0, 00:27:16.480 "assigned_rate_limits": { 00:27:16.480 "rw_ios_per_sec": 0, 00:27:16.480 "rw_mbytes_per_sec": 0, 00:27:16.480 "r_mbytes_per_sec": 0, 00:27:16.480 "w_mbytes_per_sec": 0 00:27:16.480 }, 00:27:16.480 "claimed": false, 00:27:16.480 "zoned": false, 00:27:16.480 "supported_io_types": { 00:27:16.480 "read": true, 00:27:16.480 "write": true, 00:27:16.480 "unmap": true, 00:27:16.480 "flush": true, 00:27:16.480 "reset": true, 00:27:16.480 "nvme_admin": false, 00:27:16.480 "nvme_io": false, 00:27:16.480 "nvme_io_md": false, 00:27:16.480 "write_zeroes": true, 00:27:16.480 "zcopy": false, 00:27:16.480 "get_zone_info": false, 00:27:16.480 "zone_management": false, 00:27:16.480 "zone_append": false, 00:27:16.480 "compare": true, 00:27:16.480 "compare_and_write": false, 00:27:16.480 "abort": true, 00:27:16.480 "seek_hole": false, 00:27:16.480 "seek_data": false, 00:27:16.480 "copy": true, 00:27:16.480 "nvme_iov_md": false 00:27:16.480 }, 00:27:16.480 "driver_specific": { 00:27:16.480 "gpt": { 00:27:16.480 "base_bdev": "Nvme0n1", 00:27:16.480 "offset_blocks": 774400, 00:27:16.480 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:27:16.480 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:27:16.480 "partition_name": "SPDK_TEST_second" 00:27:16.480 } 00:27:16.480 
} 00:27:16.480 } 00:27:16.480 ]' 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:16.480 07:35:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 69142 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69142 ']' 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69142 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69142 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:16.480 killing process with pid 69142 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69142' 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69142 00:27:16.480 07:35:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69142 00:27:19.010 00:27:19.010 real 0m4.646s 00:27:19.010 user 0m4.767s 00:27:19.010 sys 0m0.682s 00:27:19.010 07:35:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.010 ************************************ 00:27:19.010 END TEST bdev_gpt_uuid 00:27:19.010 ************************************ 00:27:19.010 07:35:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:27:19.010 07:35:57 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:27:19.010 07:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
00:27:19.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:19.525 Waiting for block devices as requested 00:27:19.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.783 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.783 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:19.783 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:25.042 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:25.042 07:36:03 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:27:25.042 07:36:03 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:27:25.300 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:27:25.300 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:27:25.300 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:27:25.300 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:27:25.300 07:36:03 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:27:25.300 00:27:25.300 real 1m9.383s 00:27:25.300 user 1m27.491s 00:27:25.300 sys 0m11.319s 00:27:25.300 07:36:03 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.300 07:36:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:27:25.300 ************************************ 00:27:25.300 END TEST blockdev_nvme_gpt 00:27:25.300 ************************************ 00:27:25.300 07:36:03 -- common/autotest_common.sh@1142 -- # return 0 00:27:25.300 07:36:03 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:25.300 07:36:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:25.300 07:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.300 07:36:03 -- common/autotest_common.sh@10 -- # set +x 00:27:25.300 ************************************ 00:27:25.300 START TEST nvme 00:27:25.300 ************************************ 00:27:25.300 07:36:03 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:27:25.300 * Looking for test storage... 
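Before handing the hardware to the nvme suite, the blockdev_nvme_gpt cleanup above returns the devices to their kernel drivers and erases the GPT metadata the test wrote onto /dev/nvme1n1 (primary header, backup header and protective MBR, as the wipefs output shows). The equivalent manual steps are roughly the following, with the device name taken from this run:

# Sketch of the cleanup traced above (needs sufficient privileges).
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset    # rebind kernel drivers
if [[ -b /dev/nvme1n1 ]]; then
    wipefs --all /dev/nvme1n1                          # drop GPT headers and PMBR
fi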
00:27:25.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:25.300 07:36:03 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:25.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:26.432 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.432 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.432 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.432 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:26.691 07:36:05 nvme -- nvme/nvme.sh@79 -- # uname 00:27:26.691 07:36:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:27:26.691 07:36:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:27:26.691 07:36:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1069 -- # stubpid=69785 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:27:26.691 Waiting for stub to ready for secondary processes... 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69785 ]] 00:27:26.691 07:36:05 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:27:26.691 [2024-07-15 07:36:05.132178] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:27:26.691 [2024-07-15 07:36:05.132519] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:27:27.626 07:36:06 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:27.626 07:36:06 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69785 ]] 00:27:27.626 07:36:06 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:27:28.559 [2024-07-15 07:36:06.828715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:28.559 [2024-07-15 07:36:07.070222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.559 [2024-07-15 07:36:07.070367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.559 [2024-07-15 07:36:07.070388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.559 07:36:07 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:28.559 07:36:07 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69785 ]] 00:27:28.559 07:36:07 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:27:28.559 [2024-07-15 07:36:07.089574] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:27:28.559 [2024-07-15 07:36:07.089633] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:28.559 [2024-07-15 07:36:07.098946] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:27:28.559 [2024-07-15 07:36:07.099074] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:27:28.559 [2024-07-15 07:36:07.101290] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:28.559 [2024-07-15 07:36:07.101582] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:27:28.559 [2024-07-15 07:36:07.101750] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:27:28.559 [2024-07-15 07:36:07.103869] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:28.559 [2024-07-15 07:36:07.104121] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:27:28.559 [2024-07-15 07:36:07.104194] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:27:28.559 [2024-07-15 07:36:07.106398] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:27:28.559 [2024-07-15 07:36:07.106595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:27:28.559 [2024-07-15 07:36:07.106676] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:27:28.559 [2024-07-15 07:36:07.106745] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:27:28.559 [2024-07-15 07:36:07.106797] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:27:29.491 07:36:08 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:27:29.491 done. 00:27:29.491 07:36:08 nvme -- common/autotest_common.sh@1076 -- # echo done. 
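The nvme suite launches test/app/stub as a long-lived primary process with the options shown in the trace (-s 4096 -i 0 -m 0xE) so that the individual test binaries that follow can reuse its memory setup, typically by attaching as secondary processes, instead of paying the full initialization cost each time. The readiness handshake visible above reduces to the loop sketched here; the stub signals readiness by creating /var/run/spdk_stub0, and the early-exit branch is illustrative.

# Sketch of the stub startup wait traced above (paths as in this run).
/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
echo "Waiting for stub to ready for secondary processes..."
while [ ! -e /var/run/spdk_stub0 ]; do
    [ -e /proc/$stubpid ] || { echo "stub exited early"; exit 1; }   # bail if the stub died
    sleep 1s
done
echo done.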
00:27:29.491 07:36:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:29.491 07:36:08 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:27:29.491 07:36:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.491 07:36:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:29.491 ************************************ 00:27:29.491 START TEST nvme_reset 00:27:29.491 ************************************ 00:27:29.491 07:36:08 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:27:30.056 Initializing NVMe Controllers 00:27:30.056 Skipping QEMU NVMe SSD at 0000:00:10.0 00:27:30.056 Skipping QEMU NVMe SSD at 0000:00:11.0 00:27:30.056 Skipping QEMU NVMe SSD at 0000:00:13.0 00:27:30.056 Skipping QEMU NVMe SSD at 0000:00:12.0 00:27:30.056 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:27:30.056 00:27:30.056 real 0m0.305s 00:27:30.056 user 0m0.099s 00:27:30.056 sys 0m0.162s 00:27:30.056 07:36:08 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.056 07:36:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:27:30.056 ************************************ 00:27:30.056 END TEST nvme_reset 00:27:30.056 ************************************ 00:27:30.056 07:36:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:30.056 07:36:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:27:30.056 07:36:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:30.056 07:36:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.056 07:36:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:30.056 ************************************ 00:27:30.056 START TEST nvme_identify 00:27:30.056 ************************************ 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:27:30.056 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:27:30.056 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:27:30.056 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:27:30.056 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:30.056 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:30.057 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:27:30.057 07:36:08 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:30.057 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:27:30.317 [2024-07-15 07:36:08.750052] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69830 terminated unexpected 00:27:30.317 ===================================================== 00:27:30.317 NVMe 
Controller at 0000:00:10.0 [1b36:0010] 00:27:30.317 ===================================================== 00:27:30.317 Controller Capabilities/Features 00:27:30.317 ================================ 00:27:30.317 Vendor ID: 1b36 00:27:30.317 Subsystem Vendor ID: 1af4 00:27:30.317 Serial Number: 12340 00:27:30.317 Model Number: QEMU NVMe Ctrl 00:27:30.317 Firmware Version: 8.0.0 00:27:30.317 Recommended Arb Burst: 6 00:27:30.317 IEEE OUI Identifier: 00 54 52 00:27:30.317 Multi-path I/O 00:27:30.317 May have multiple subsystem ports: No 00:27:30.317 May have multiple controllers: No 00:27:30.317 Associated with SR-IOV VF: No 00:27:30.317 Max Data Transfer Size: 524288 00:27:30.317 Max Number of Namespaces: 256 00:27:30.317 Max Number of I/O Queues: 64 00:27:30.317 NVMe Specification Version (VS): 1.4 00:27:30.317 NVMe Specification Version (Identify): 1.4 00:27:30.317 Maximum Queue Entries: 2048 00:27:30.317 Contiguous Queues Required: Yes 00:27:30.317 Arbitration Mechanisms Supported 00:27:30.317 Weighted Round Robin: Not Supported 00:27:30.317 Vendor Specific: Not Supported 00:27:30.317 Reset Timeout: 7500 ms 00:27:30.317 Doorbell Stride: 4 bytes 00:27:30.317 NVM Subsystem Reset: Not Supported 00:27:30.317 Command Sets Supported 00:27:30.317 NVM Command Set: Supported 00:27:30.317 Boot Partition: Not Supported 00:27:30.317 Memory Page Size Minimum: 4096 bytes 00:27:30.317 Memory Page Size Maximum: 65536 bytes 00:27:30.317 Persistent Memory Region: Not Supported 00:27:30.317 Optional Asynchronous Events Supported 00:27:30.317 Namespace Attribute Notices: Supported 00:27:30.317 Firmware Activation Notices: Not Supported 00:27:30.317 ANA Change Notices: Not Supported 00:27:30.317 PLE Aggregate Log Change Notices: Not Supported 00:27:30.317 LBA Status Info Alert Notices: Not Supported 00:27:30.317 EGE Aggregate Log Change Notices: Not Supported 00:27:30.317 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.317 Zone Descriptor Change Notices: Not Supported 00:27:30.317 Discovery Log Change Notices: Not Supported 00:27:30.317 Controller Attributes 00:27:30.317 128-bit Host Identifier: Not Supported 00:27:30.317 Non-Operational Permissive Mode: Not Supported 00:27:30.317 NVM Sets: Not Supported 00:27:30.317 Read Recovery Levels: Not Supported 00:27:30.317 Endurance Groups: Not Supported 00:27:30.317 Predictable Latency Mode: Not Supported 00:27:30.317 Traffic Based Keep ALive: Not Supported 00:27:30.317 Namespace Granularity: Not Supported 00:27:30.317 SQ Associations: Not Supported 00:27:30.317 UUID List: Not Supported 00:27:30.317 Multi-Domain Subsystem: Not Supported 00:27:30.317 Fixed Capacity Management: Not Supported 00:27:30.317 Variable Capacity Management: Not Supported 00:27:30.317 Delete Endurance Group: Not Supported 00:27:30.317 Delete NVM Set: Not Supported 00:27:30.317 Extended LBA Formats Supported: Supported 00:27:30.317 Flexible Data Placement Supported: Not Supported 00:27:30.317 00:27:30.317 Controller Memory Buffer Support 00:27:30.317 ================================ 00:27:30.317 Supported: No 00:27:30.317 00:27:30.317 Persistent Memory Region Support 00:27:30.317 ================================ 00:27:30.317 Supported: No 00:27:30.317 00:27:30.317 Admin Command Set Attributes 00:27:30.317 ============================ 00:27:30.317 Security Send/Receive: Not Supported 00:27:30.317 Format NVM: Supported 00:27:30.317 Firmware Activate/Download: Not Supported 00:27:30.317 Namespace Management: Supported 00:27:30.317 Device Self-Test: Not Supported 00:27:30.317 
Directives: Supported 00:27:30.317 NVMe-MI: Not Supported 00:27:30.317 Virtualization Management: Not Supported 00:27:30.317 Doorbell Buffer Config: Supported 00:27:30.317 Get LBA Status Capability: Not Supported 00:27:30.317 Command & Feature Lockdown Capability: Not Supported 00:27:30.317 Abort Command Limit: 4 00:27:30.317 Async Event Request Limit: 4 00:27:30.317 Number of Firmware Slots: N/A 00:27:30.317 Firmware Slot 1 Read-Only: N/A 00:27:30.317 Firmware Activation Without Reset: N/A 00:27:30.317 Multiple Update Detection Support: N/A 00:27:30.317 Firmware Update Granularity: No Information Provided 00:27:30.317 Per-Namespace SMART Log: Yes 00:27:30.317 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.317 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:30.317 Command Effects Log Page: Supported 00:27:30.317 Get Log Page Extended Data: Supported 00:27:30.317 Telemetry Log Pages: Not Supported 00:27:30.317 Persistent Event Log Pages: Not Supported 00:27:30.317 Supported Log Pages Log Page: May Support 00:27:30.317 Commands Supported & Effects Log Page: Not Supported 00:27:30.317 Feature Identifiers & Effects Log Page:May Support 00:27:30.317 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.317 Data Area 4 for Telemetry Log: Not Supported 00:27:30.317 Error Log Page Entries Supported: 1 00:27:30.317 Keep Alive: Not Supported 00:27:30.317 00:27:30.317 NVM Command Set Attributes 00:27:30.317 ========================== 00:27:30.317 Submission Queue Entry Size 00:27:30.317 Max: 64 00:27:30.317 Min: 64 00:27:30.317 Completion Queue Entry Size 00:27:30.317 Max: 16 00:27:30.317 Min: 16 00:27:30.317 Number of Namespaces: 256 00:27:30.317 Compare Command: Supported 00:27:30.317 Write Uncorrectable Command: Not Supported 00:27:30.317 Dataset Management Command: Supported 00:27:30.317 Write Zeroes Command: Supported 00:27:30.317 Set Features Save Field: Supported 00:27:30.317 Reservations: Not Supported 00:27:30.317 Timestamp: Supported 00:27:30.317 Copy: Supported 00:27:30.317 Volatile Write Cache: Present 00:27:30.317 Atomic Write Unit (Normal): 1 00:27:30.317 Atomic Write Unit (PFail): 1 00:27:30.317 Atomic Compare & Write Unit: 1 00:27:30.317 Fused Compare & Write: Not Supported 00:27:30.317 Scatter-Gather List 00:27:30.317 SGL Command Set: Supported 00:27:30.317 SGL Keyed: Not Supported 00:27:30.317 SGL Bit Bucket Descriptor: Not Supported 00:27:30.317 SGL Metadata Pointer: Not Supported 00:27:30.317 Oversized SGL: Not Supported 00:27:30.317 SGL Metadata Address: Not Supported 00:27:30.317 SGL Offset: Not Supported 00:27:30.317 Transport SGL Data Block: Not Supported 00:27:30.317 Replay Protected Memory Block: Not Supported 00:27:30.317 00:27:30.317 Firmware Slot Information 00:27:30.317 ========================= 00:27:30.317 Active slot: 1 00:27:30.317 Slot 1 Firmware Revision: 1.0 00:27:30.317 00:27:30.317 00:27:30.317 Commands Supported and Effects 00:27:30.317 ============================== 00:27:30.317 Admin Commands 00:27:30.317 -------------- 00:27:30.317 Delete I/O Submission Queue (00h): Supported 00:27:30.317 Create I/O Submission Queue (01h): Supported 00:27:30.317 Get Log Page (02h): Supported 00:27:30.317 Delete I/O Completion Queue (04h): Supported 00:27:30.317 Create I/O Completion Queue (05h): Supported 00:27:30.317 Identify (06h): Supported 00:27:30.317 Abort (08h): Supported 00:27:30.317 Set Features (09h): Supported 00:27:30.317 Get Features (0Ah): Supported 00:27:30.317 Asynchronous Event Request (0Ch): Supported 00:27:30.317 Namespace Attachment 
(15h): Supported NS-Inventory-Change 00:27:30.317 Directive Send (19h): Supported 00:27:30.318 Directive Receive (1Ah): Supported 00:27:30.318 Virtualization Management (1Ch): Supported 00:27:30.318 Doorbell Buffer Config (7Ch): Supported 00:27:30.318 Format NVM (80h): Supported LBA-Change 00:27:30.318 I/O Commands 00:27:30.318 ------------ 00:27:30.318 Flush (00h): Supported LBA-Change 00:27:30.318 Write (01h): Supported LBA-Change 00:27:30.318 Read (02h): Supported 00:27:30.318 Compare (05h): Supported 00:27:30.318 Write Zeroes (08h): Supported LBA-Change 00:27:30.318 Dataset Management (09h): Supported LBA-Change 00:27:30.318 Unknown (0Ch): Supported 00:27:30.318 Unknown (12h): Supported 00:27:30.318 Copy (19h): Supported LBA-Change 00:27:30.318 Unknown (1Dh): Supported LBA-Change 00:27:30.318 00:27:30.318 Error Log 00:27:30.318 ========= 00:27:30.318 00:27:30.318 Arbitration 00:27:30.318 =========== 00:27:30.318 Arbitration Burst: no limit 00:27:30.318 00:27:30.318 Power Management 00:27:30.318 ================ 00:27:30.318 Number of Power States: 1 00:27:30.318 Current Power State: Power State #0 00:27:30.318 Power State #0: 00:27:30.318 Max Power: 25.00 W 00:27:30.318 Non-Operational State: Operational 00:27:30.318 Entry Latency: 16 microseconds 00:27:30.318 Exit Latency: 4 microseconds 00:27:30.318 Relative Read Throughput: 0 00:27:30.318 Relative Read Latency: 0 00:27:30.318 Relative Write Throughput: 0 00:27:30.318 Relative Write Latency: 0 00:27:30.318 Idle Power[2024-07-15 07:36:08.751580] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69830 terminated unexpected 00:27:30.318 : Not Reported 00:27:30.318 Active Power: Not Reported 00:27:30.318 Non-Operational Permissive Mode: Not Supported 00:27:30.318 00:27:30.318 Health Information 00:27:30.318 ================== 00:27:30.318 Critical Warnings: 00:27:30.318 Available Spare Space: OK 00:27:30.318 Temperature: OK 00:27:30.318 Device Reliability: OK 00:27:30.318 Read Only: No 00:27:30.318 Volatile Memory Backup: OK 00:27:30.318 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.318 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.318 Available Spare: 0% 00:27:30.318 Available Spare Threshold: 0% 00:27:30.318 Life Percentage Used: 0% 00:27:30.318 Data Units Read: 1068 00:27:30.318 Data Units Written: 905 00:27:30.318 Host Read Commands: 45319 00:27:30.318 Host Write Commands: 43861 00:27:30.318 Controller Busy Time: 0 minutes 00:27:30.318 Power Cycles: 0 00:27:30.318 Power On Hours: 0 hours 00:27:30.318 Unsafe Shutdowns: 0 00:27:30.318 Unrecoverable Media Errors: 0 00:27:30.318 Lifetime Error Log Entries: 0 00:27:30.318 Warning Temperature Time: 0 minutes 00:27:30.318 Critical Temperature Time: 0 minutes 00:27:30.318 00:27:30.318 Number of Queues 00:27:30.318 ================ 00:27:30.318 Number of I/O Submission Queues: 64 00:27:30.318 Number of I/O Completion Queues: 64 00:27:30.318 00:27:30.318 ZNS Specific Controller Data 00:27:30.318 ============================ 00:27:30.318 Zone Append Size Limit: 0 00:27:30.318 00:27:30.318 00:27:30.318 Active Namespaces 00:27:30.318 ================= 00:27:30.318 Namespace ID:1 00:27:30.318 Error Recovery Timeout: Unlimited 00:27:30.318 Command Set Identifier: NVM (00h) 00:27:30.318 Deallocate: Supported 00:27:30.318 Deallocated/Unwritten Error: Supported 00:27:30.318 Deallocated Read Value: All 0x00 00:27:30.318 Deallocate in Write Zeroes: Not Supported 00:27:30.318 Deallocated Guard Field: 0xFFFF 00:27:30.318 Flush: Supported 00:27:30.318 
Reservation: Not Supported 00:27:30.318 Metadata Transferred as: Separate Metadata Buffer 00:27:30.318 Namespace Sharing Capabilities: Private 00:27:30.318 Size (in LBAs): 1548666 (5GiB) 00:27:30.318 Capacity (in LBAs): 1548666 (5GiB) 00:27:30.318 Utilization (in LBAs): 1548666 (5GiB) 00:27:30.318 Thin Provisioning: Not Supported 00:27:30.318 Per-NS Atomic Units: No 00:27:30.318 Maximum Single Source Range Length: 128 00:27:30.318 Maximum Copy Length: 128 00:27:30.318 Maximum Source Range Count: 128 00:27:30.318 NGUID/EUI64 Never Reused: No 00:27:30.318 Namespace Write Protected: No 00:27:30.318 Number of LBA Formats: 8 00:27:30.318 Current LBA Format: LBA Format #07 00:27:30.318 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.318 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.318 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.318 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.318 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.318 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.318 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.318 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.318 00:27:30.318 NVM Specific Namespace Data 00:27:30.318 =========================== 00:27:30.318 Logical Block Storage Tag Mask: 0 00:27:30.318 Protection Information Capabilities: 00:27:30.318 16b Guard Protection Information Storage Tag Support: No 00:27:30.318 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.318 Storage Tag Check Read Support: No 00:27:30.318 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.318 ===================================================== 00:27:30.318 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:30.318 ===================================================== 00:27:30.318 Controller Capabilities/Features 00:27:30.318 ================================ 00:27:30.318 Vendor ID: 1b36 00:27:30.318 Subsystem Vendor ID: 1af4 00:27:30.318 Serial Number: 12341 00:27:30.318 Model Number: QEMU NVMe Ctrl 00:27:30.318 Firmware Version: 8.0.0 00:27:30.318 Recommended Arb Burst: 6 00:27:30.318 IEEE OUI Identifier: 00 54 52 00:27:30.318 Multi-path I/O 00:27:30.318 May have multiple subsystem ports: No 00:27:30.318 May have multiple controllers: No 00:27:30.318 Associated with SR-IOV VF: No 00:27:30.318 Max Data Transfer Size: 524288 00:27:30.318 Max Number of Namespaces: 256 00:27:30.318 Max Number of I/O Queues: 64 00:27:30.318 NVMe Specification Version (VS): 1.4 00:27:30.318 NVMe Specification Version (Identify): 1.4 00:27:30.318 Maximum Queue Entries: 2048 00:27:30.318 Contiguous Queues Required: Yes 00:27:30.318 Arbitration Mechanisms Supported 00:27:30.318 Weighted Round Robin: Not Supported 00:27:30.318 Vendor 
Specific: Not Supported 00:27:30.318 Reset Timeout: 7500 ms 00:27:30.318 Doorbell Stride: 4 bytes 00:27:30.318 NVM Subsystem Reset: Not Supported 00:27:30.318 Command Sets Supported 00:27:30.318 NVM Command Set: Supported 00:27:30.318 Boot Partition: Not Supported 00:27:30.318 Memory Page Size Minimum: 4096 bytes 00:27:30.318 Memory Page Size Maximum: 65536 bytes 00:27:30.318 Persistent Memory Region: Not Supported 00:27:30.318 Optional Asynchronous Events Supported 00:27:30.318 Namespace Attribute Notices: Supported 00:27:30.318 Firmware Activation Notices: Not Supported 00:27:30.318 ANA Change Notices: Not Supported 00:27:30.318 PLE Aggregate Log Change Notices: Not Supported 00:27:30.318 LBA Status Info Alert Notices: Not Supported 00:27:30.318 EGE Aggregate Log Change Notices: Not Supported 00:27:30.318 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.318 Zone Descriptor Change Notices: Not Supported 00:27:30.318 Discovery Log Change Notices: Not Supported 00:27:30.318 Controller Attributes 00:27:30.318 128-bit Host Identifier: Not Supported 00:27:30.318 Non-Operational Permissive Mode: Not Supported 00:27:30.318 NVM Sets: Not Supported 00:27:30.318 Read Recovery Levels: Not Supported 00:27:30.318 Endurance Groups: Not Supported 00:27:30.318 Predictable Latency Mode: Not Supported 00:27:30.318 Traffic Based Keep ALive: Not Supported 00:27:30.318 Namespace Granularity: Not Supported 00:27:30.318 SQ Associations: Not Supported 00:27:30.318 UUID List: Not Supported 00:27:30.318 Multi-Domain Subsystem: Not Supported 00:27:30.318 Fixed Capacity Management: Not Supported 00:27:30.318 Variable Capacity Management: Not Supported 00:27:30.318 Delete Endurance Group: Not Supported 00:27:30.318 Delete NVM Set: Not Supported 00:27:30.318 Extended LBA Formats Supported: Supported 00:27:30.318 Flexible Data Placement Supported: Not Supported 00:27:30.318 00:27:30.318 Controller Memory Buffer Support 00:27:30.318 ================================ 00:27:30.318 Supported: No 00:27:30.318 00:27:30.318 Persistent Memory Region Support 00:27:30.318 ================================ 00:27:30.318 Supported: No 00:27:30.318 00:27:30.318 Admin Command Set Attributes 00:27:30.318 ============================ 00:27:30.318 Security Send/Receive: Not Supported 00:27:30.318 Format NVM: Supported 00:27:30.318 Firmware Activate/Download: Not Supported 00:27:30.318 Namespace Management: Supported 00:27:30.318 Device Self-Test: Not Supported 00:27:30.318 Directives: Supported 00:27:30.319 NVMe-MI: Not Supported 00:27:30.319 Virtualization Management: Not Supported 00:27:30.319 Doorbell Buffer Config: Supported 00:27:30.319 Get LBA Status Capability: Not Supported 00:27:30.319 Command & Feature Lockdown Capability: Not Supported 00:27:30.319 Abort Command Limit: 4 00:27:30.319 Async Event Request Limit: 4 00:27:30.319 Number of Firmware Slots: N/A 00:27:30.319 Firmware Slot 1 Read-Only: N/A 00:27:30.319 Firmware Activation Without Reset: N/A 00:27:30.319 Multiple Update Detection Support: N/A 00:27:30.319 Firmware Update Granularity: No Information Provided 00:27:30.319 Per-Namespace SMART Log: Yes 00:27:30.319 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.319 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:27:30.319 Command Effects Log Page: Supported 00:27:30.319 Get Log Page Extended Data: Supported 00:27:30.319 Telemetry Log Pages: Not Supported 00:27:30.319 Persistent Event Log Pages: Not Supported 00:27:30.319 Supported Log Pages Log Page: May Support 00:27:30.319 Commands Supported & Effects 
Log Page: Not Supported 00:27:30.319 Feature Identifiers & Effects Log Page:May Support 00:27:30.319 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.319 Data Area 4 for Telemetry Log: Not Supported 00:27:30.319 Error Log Page Entries Supported: 1 00:27:30.319 Keep Alive: Not Supported 00:27:30.319 00:27:30.319 NVM Command Set Attributes 00:27:30.319 ========================== 00:27:30.319 Submission Queue Entry Size 00:27:30.319 Max: 64 00:27:30.319 Min: 64 00:27:30.319 Completion Queue Entry Size 00:27:30.319 Max: 16 00:27:30.319 Min: 16 00:27:30.319 Number of Namespaces: 256 00:27:30.319 Compare Command: Supported 00:27:30.319 Write Uncorrectable Command: Not Supported 00:27:30.319 Dataset Management Command: Supported 00:27:30.319 Write Zeroes Command: Supported 00:27:30.319 Set Features Save Field: Supported 00:27:30.319 Reservations: Not Supported 00:27:30.319 Timestamp: Supported 00:27:30.319 Copy: Supported 00:27:30.319 Volatile Write Cache: Present 00:27:30.319 Atomic Write Unit (Normal): 1 00:27:30.319 Atomic Write Unit (PFail): 1 00:27:30.319 Atomic Compare & Write Unit: 1 00:27:30.319 Fused Compare & Write: Not Supported 00:27:30.319 Scatter-Gather List 00:27:30.319 SGL Command Set: Supported 00:27:30.319 SGL Keyed: Not Supported 00:27:30.319 SGL Bit Bucket Descriptor: Not Supported 00:27:30.319 SGL Metadata Pointer: Not Supported 00:27:30.319 Oversized SGL: Not Supported 00:27:30.319 SGL Metadata Address: Not Supported 00:27:30.319 SGL Offset: Not Supported 00:27:30.319 Transport SGL Data Block: Not Supported 00:27:30.319 Replay Protected Memory Block: Not Supported 00:27:30.319 00:27:30.319 Firmware Slot Information 00:27:30.319 ========================= 00:27:30.319 Active slot: 1 00:27:30.319 Slot 1 Firmware Revision: 1.0 00:27:30.319 00:27:30.319 00:27:30.319 Commands Supported and Effects 00:27:30.319 ============================== 00:27:30.319 Admin Commands 00:27:30.319 -------------- 00:27:30.319 Delete I/O Submission Queue (00h): Supported 00:27:30.319 Create I/O Submission Queue (01h): Supported 00:27:30.319 Get Log Page (02h): Supported 00:27:30.319 Delete I/O Completion Queue (04h): Supported 00:27:30.319 Create I/O Completion Queue (05h): Supported 00:27:30.319 Identify (06h): Supported 00:27:30.319 Abort (08h): Supported 00:27:30.319 Set Features (09h): Supported 00:27:30.319 Get Features (0Ah): Supported 00:27:30.319 Asynchronous Event Request (0Ch): Supported 00:27:30.319 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:30.319 Directive Send (19h): Supported 00:27:30.319 Directive Receive (1Ah): Supported 00:27:30.319 Virtualization Management (1Ch): Supported 00:27:30.319 Doorbell Buffer Config (7Ch): Supported 00:27:30.319 Format NVM (80h): Supported LBA-Change 00:27:30.319 I/O Commands 00:27:30.319 ------------ 00:27:30.319 Flush (00h): Supported LBA-Change 00:27:30.319 Write (01h): Supported LBA-Change 00:27:30.319 Read (02h): Supported 00:27:30.319 Compare (05h): Supported 00:27:30.319 Write Zeroes (08h): Supported LBA-Change 00:27:30.319 Dataset Management (09h): Supported LBA-Change 00:27:30.319 Unknown (0Ch): Supported 00:27:30.319 Unknown (12h): Supported 00:27:30.319 Copy (19h): Supported LBA-Change 00:27:30.319 Unknown (1Dh): Supported LBA-Change 00:27:30.319 00:27:30.319 Error Log 00:27:30.319 ========= 00:27:30.319 00:27:30.319 Arbitration 00:27:30.319 =========== 00:27:30.319 Arbitration Burst: no limit 00:27:30.319 00:27:30.319 Power Management 00:27:30.319 ================ 00:27:30.319 Number of Power States: 1 
00:27:30.319 Current Power State: Power State #0 00:27:30.319 Power State #0: 00:27:30.319 Max Power: 25.00 W 00:27:30.319 Non-Operational State: Operational 00:27:30.319 Entry Latency: 16 microseconds 00:27:30.319 Exit Latency: 4 microseconds 00:27:30.319 Relative Read Throughput: 0 00:27:30.319 Relative Read Latency: 0 00:27:30.319 Relative Write Throughput: 0 00:27:30.319 Relative Write Latency: 0 00:27:30.319 Idle Power: Not Reported 00:27:30.319 Active Power: Not Reported 00:27:30.319 Non-Operational Permissive Mode: Not Supported 00:27:30.319 00:27:30.319 Health Information 00:27:30.319 ================== 00:27:30.319 Critical Warnings: 00:27:30.319 Available Spare Space: OK 00:27:30.319 Temperature: [2024-07-15 07:36:08.752475] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69830 terminated unexpected 00:27:30.319 OK 00:27:30.319 Device Reliability: OK 00:27:30.319 Read Only: No 00:27:30.319 Volatile Memory Backup: OK 00:27:30.319 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.319 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.319 Available Spare: 0% 00:27:30.319 Available Spare Threshold: 0% 00:27:30.319 Life Percentage Used: 0% 00:27:30.319 Data Units Read: 770 00:27:30.319 Data Units Written: 621 00:27:30.319 Host Read Commands: 32441 00:27:30.319 Host Write Commands: 30218 00:27:30.319 Controller Busy Time: 0 minutes 00:27:30.319 Power Cycles: 0 00:27:30.319 Power On Hours: 0 hours 00:27:30.319 Unsafe Shutdowns: 0 00:27:30.319 Unrecoverable Media Errors: 0 00:27:30.319 Lifetime Error Log Entries: 0 00:27:30.319 Warning Temperature Time: 0 minutes 00:27:30.319 Critical Temperature Time: 0 minutes 00:27:30.319 00:27:30.319 Number of Queues 00:27:30.319 ================ 00:27:30.319 Number of I/O Submission Queues: 64 00:27:30.319 Number of I/O Completion Queues: 64 00:27:30.319 00:27:30.319 ZNS Specific Controller Data 00:27:30.319 ============================ 00:27:30.319 Zone Append Size Limit: 0 00:27:30.319 00:27:30.319 00:27:30.319 Active Namespaces 00:27:30.319 ================= 00:27:30.319 Namespace ID:1 00:27:30.319 Error Recovery Timeout: Unlimited 00:27:30.319 Command Set Identifier: NVM (00h) 00:27:30.319 Deallocate: Supported 00:27:30.319 Deallocated/Unwritten Error: Supported 00:27:30.319 Deallocated Read Value: All 0x00 00:27:30.319 Deallocate in Write Zeroes: Not Supported 00:27:30.319 Deallocated Guard Field: 0xFFFF 00:27:30.319 Flush: Supported 00:27:30.319 Reservation: Not Supported 00:27:30.319 Namespace Sharing Capabilities: Private 00:27:30.319 Size (in LBAs): 1310720 (5GiB) 00:27:30.319 Capacity (in LBAs): 1310720 (5GiB) 00:27:30.319 Utilization (in LBAs): 1310720 (5GiB) 00:27:30.319 Thin Provisioning: Not Supported 00:27:30.319 Per-NS Atomic Units: No 00:27:30.319 Maximum Single Source Range Length: 128 00:27:30.319 Maximum Copy Length: 128 00:27:30.319 Maximum Source Range Count: 128 00:27:30.319 NGUID/EUI64 Never Reused: No 00:27:30.319 Namespace Write Protected: No 00:27:30.319 Number of LBA Formats: 8 00:27:30.319 Current LBA Format: LBA Format #04 00:27:30.319 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.319 LBA Format #07: 
Data Size: 4096 Metadata Size: 64 00:27:30.319 00:27:30.319 NVM Specific Namespace Data 00:27:30.319 =========================== 00:27:30.319 Logical Block Storage Tag Mask: 0 00:27:30.319 Protection Information Capabilities: 00:27:30.319 16b Guard Protection Information Storage Tag Support: No 00:27:30.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.319 Storage Tag Check Read Support: No 00:27:30.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.319 ===================================================== 00:27:30.319 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:30.320 ===================================================== 00:27:30.320 Controller Capabilities/Features 00:27:30.320 ================================ 00:27:30.320 Vendor ID: 1b36 00:27:30.320 Subsystem Vendor ID: 1af4 00:27:30.320 Serial Number: 12343 00:27:30.320 Model Number: QEMU NVMe Ctrl 00:27:30.320 Firmware Version: 8.0.0 00:27:30.320 Recommended Arb Burst: 6 00:27:30.320 IEEE OUI Identifier: 00 54 52 00:27:30.320 Multi-path I/O 00:27:30.320 May have multiple subsystem ports: No 00:27:30.320 May have multiple controllers: Yes 00:27:30.320 Associated with SR-IOV VF: No 00:27:30.320 Max Data Transfer Size: 524288 00:27:30.320 Max Number of Namespaces: 256 00:27:30.320 Max Number of I/O Queues: 64 00:27:30.320 NVMe Specification Version (VS): 1.4 00:27:30.320 NVMe Specification Version (Identify): 1.4 00:27:30.320 Maximum Queue Entries: 2048 00:27:30.320 Contiguous Queues Required: Yes 00:27:30.320 Arbitration Mechanisms Supported 00:27:30.320 Weighted Round Robin: Not Supported 00:27:30.320 Vendor Specific: Not Supported 00:27:30.320 Reset Timeout: 7500 ms 00:27:30.320 Doorbell Stride: 4 bytes 00:27:30.320 NVM Subsystem Reset: Not Supported 00:27:30.320 Command Sets Supported 00:27:30.320 NVM Command Set: Supported 00:27:30.320 Boot Partition: Not Supported 00:27:30.320 Memory Page Size Minimum: 4096 bytes 00:27:30.320 Memory Page Size Maximum: 65536 bytes 00:27:30.320 Persistent Memory Region: Not Supported 00:27:30.320 Optional Asynchronous Events Supported 00:27:30.320 Namespace Attribute Notices: Supported 00:27:30.320 Firmware Activation Notices: Not Supported 00:27:30.320 ANA Change Notices: Not Supported 00:27:30.320 PLE Aggregate Log Change Notices: Not Supported 00:27:30.320 LBA Status Info Alert Notices: Not Supported 00:27:30.320 EGE Aggregate Log Change Notices: Not Supported 00:27:30.320 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.320 Zone Descriptor Change Notices: Not Supported 00:27:30.320 Discovery Log Change Notices: Not Supported 00:27:30.320 Controller Attributes 00:27:30.320 128-bit Host Identifier: Not Supported 00:27:30.320 Non-Operational Permissive Mode: Not Supported 
00:27:30.320 NVM Sets: Not Supported 00:27:30.320 Read Recovery Levels: Not Supported 00:27:30.320 Endurance Groups: Supported 00:27:30.320 Predictable Latency Mode: Not Supported 00:27:30.320 Traffic Based Keep ALive: Not Supported 00:27:30.320 Namespace Granularity: Not Supported 00:27:30.320 SQ Associations: Not Supported 00:27:30.320 UUID List: Not Supported 00:27:30.320 Multi-Domain Subsystem: Not Supported 00:27:30.320 Fixed Capacity Management: Not Supported 00:27:30.320 Variable Capacity Management: Not Supported 00:27:30.320 Delete Endurance Group: Not Supported 00:27:30.320 Delete NVM Set: Not Supported 00:27:30.320 Extended LBA Formats Supported: Supported 00:27:30.320 Flexible Data Placement Supported: Supported 00:27:30.320 00:27:30.320 Controller Memory Buffer Support 00:27:30.320 ================================ 00:27:30.320 Supported: No 00:27:30.320 00:27:30.320 Persistent Memory Region Support 00:27:30.320 ================================ 00:27:30.320 Supported: No 00:27:30.320 00:27:30.320 Admin Command Set Attributes 00:27:30.320 ============================ 00:27:30.320 Security Send/Receive: Not Supported 00:27:30.320 Format NVM: Supported 00:27:30.320 Firmware Activate/Download: Not Supported 00:27:30.320 Namespace Management: Supported 00:27:30.320 Device Self-Test: Not Supported 00:27:30.320 Directives: Supported 00:27:30.320 NVMe-MI: Not Supported 00:27:30.320 Virtualization Management: Not Supported 00:27:30.320 Doorbell Buffer Config: Supported 00:27:30.320 Get LBA Status Capability: Not Supported 00:27:30.320 Command & Feature Lockdown Capability: Not Supported 00:27:30.320 Abort Command Limit: 4 00:27:30.320 Async Event Request Limit: 4 00:27:30.320 Number of Firmware Slots: N/A 00:27:30.320 Firmware Slot 1 Read-Only: N/A 00:27:30.320 Firmware Activation Without Reset: N/A 00:27:30.320 Multiple Update Detection Support: N/A 00:27:30.320 Firmware Update Granularity: No Information Provided 00:27:30.320 Per-Namespace SMART Log: Yes 00:27:30.320 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.320 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:27:30.320 Command Effects Log Page: Supported 00:27:30.320 Get Log Page Extended Data: Supported 00:27:30.320 Telemetry Log Pages: Not Supported 00:27:30.320 Persistent Event Log Pages: Not Supported 00:27:30.320 Supported Log Pages Log Page: May Support 00:27:30.320 Commands Supported & Effects Log Page: Not Supported 00:27:30.320 Feature Identifiers & Effects Log Page:May Support 00:27:30.320 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.320 Data Area 4 for Telemetry Log: Not Supported 00:27:30.320 Error Log Page Entries Supported: 1 00:27:30.320 Keep Alive: Not Supported 00:27:30.320 00:27:30.320 NVM Command Set Attributes 00:27:30.320 ========================== 00:27:30.320 Submission Queue Entry Size 00:27:30.320 Max: 64 00:27:30.320 Min: 64 00:27:30.320 Completion Queue Entry Size 00:27:30.320 Max: 16 00:27:30.320 Min: 16 00:27:30.320 Number of Namespaces: 256 00:27:30.320 Compare Command: Supported 00:27:30.320 Write Uncorrectable Command: Not Supported 00:27:30.320 Dataset Management Command: Supported 00:27:30.320 Write Zeroes Command: Supported 00:27:30.320 Set Features Save Field: Supported 00:27:30.320 Reservations: Not Supported 00:27:30.320 Timestamp: Supported 00:27:30.320 Copy: Supported 00:27:30.320 Volatile Write Cache: Present 00:27:30.320 Atomic Write Unit (Normal): 1 00:27:30.320 Atomic Write Unit (PFail): 1 00:27:30.320 Atomic Compare & Write Unit: 1 00:27:30.320 Fused 
Compare & Write: Not Supported 00:27:30.320 Scatter-Gather List 00:27:30.320 SGL Command Set: Supported 00:27:30.320 SGL Keyed: Not Supported 00:27:30.320 SGL Bit Bucket Descriptor: Not Supported 00:27:30.320 SGL Metadata Pointer: Not Supported 00:27:30.320 Oversized SGL: Not Supported 00:27:30.320 SGL Metadata Address: Not Supported 00:27:30.320 SGL Offset: Not Supported 00:27:30.320 Transport SGL Data Block: Not Supported 00:27:30.320 Replay Protected Memory Block: Not Supported 00:27:30.320 00:27:30.320 Firmware Slot Information 00:27:30.320 ========================= 00:27:30.320 Active slot: 1 00:27:30.320 Slot 1 Firmware Revision: 1.0 00:27:30.320 00:27:30.320 00:27:30.320 Commands Supported and Effects 00:27:30.320 ============================== 00:27:30.320 Admin Commands 00:27:30.320 -------------- 00:27:30.320 Delete I/O Submission Queue (00h): Supported 00:27:30.320 Create I/O Submission Queue (01h): Supported 00:27:30.320 Get Log Page (02h): Supported 00:27:30.320 Delete I/O Completion Queue (04h): Supported 00:27:30.320 Create I/O Completion Queue (05h): Supported 00:27:30.320 Identify (06h): Supported 00:27:30.320 Abort (08h): Supported 00:27:30.320 Set Features (09h): Supported 00:27:30.320 Get Features (0Ah): Supported 00:27:30.320 Asynchronous Event Request (0Ch): Supported 00:27:30.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:30.320 Directive Send (19h): Supported 00:27:30.320 Directive Receive (1Ah): Supported 00:27:30.320 Virtualization Management (1Ch): Supported 00:27:30.320 Doorbell Buffer Config (7Ch): Supported 00:27:30.320 Format NVM (80h): Supported LBA-Change 00:27:30.320 I/O Commands 00:27:30.320 ------------ 00:27:30.320 Flush (00h): Supported LBA-Change 00:27:30.320 Write (01h): Supported LBA-Change 00:27:30.320 Read (02h): Supported 00:27:30.320 Compare (05h): Supported 00:27:30.320 Write Zeroes (08h): Supported LBA-Change 00:27:30.320 Dataset Management (09h): Supported LBA-Change 00:27:30.320 Unknown (0Ch): Supported 00:27:30.320 Unknown (12h): Supported 00:27:30.320 Copy (19h): Supported LBA-Change 00:27:30.320 Unknown (1Dh): Supported LBA-Change 00:27:30.320 00:27:30.320 Error Log 00:27:30.320 ========= 00:27:30.320 00:27:30.320 Arbitration 00:27:30.320 =========== 00:27:30.320 Arbitration Burst: no limit 00:27:30.320 00:27:30.320 Power Management 00:27:30.320 ================ 00:27:30.320 Number of Power States: 1 00:27:30.320 Current Power State: Power State #0 00:27:30.320 Power State #0: 00:27:30.320 Max Power: 25.00 W 00:27:30.320 Non-Operational State: Operational 00:27:30.320 Entry Latency: 16 microseconds 00:27:30.320 Exit Latency: 4 microseconds 00:27:30.320 Relative Read Throughput: 0 00:27:30.320 Relative Read Latency: 0 00:27:30.320 Relative Write Throughput: 0 00:27:30.320 Relative Write Latency: 0 00:27:30.320 Idle Power: Not Reported 00:27:30.320 Active Power: Not Reported 00:27:30.320 Non-Operational Permissive Mode: Not Supported 00:27:30.320 00:27:30.320 Health Information 00:27:30.320 ================== 00:27:30.320 Critical Warnings: 00:27:30.320 Available Spare Space: OK 00:27:30.320 Temperature: OK 00:27:30.320 Device Reliability: OK 00:27:30.320 Read Only: No 00:27:30.320 Volatile Memory Backup: OK 00:27:30.320 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.320 Available Spare: 0% 00:27:30.320 Available Spare Threshold: 0% 00:27:30.320 Life Percentage Used: 0% 00:27:30.320 Data Units Read: 826 00:27:30.321 Data Units Written: 720 00:27:30.321 
Host Read Commands: 32466 00:27:30.321 Host Write Commands: 31056 00:27:30.321 Controller Busy Time: 0 minutes 00:27:30.321 Power Cycles: 0 00:27:30.321 Power On Hours: 0 hours 00:27:30.321 Unsafe Shutdowns: 0 00:27:30.321 Unrecoverable Media Errors: 0 00:27:30.321 Lifetime Error Log Entries: 0 00:27:30.321 Warning Temperature Time: 0 minutes 00:27:30.321 Critical Temperature Time: 0 minutes 00:27:30.321 00:27:30.321 Number of Queues 00:27:30.321 ================ 00:27:30.321 Number of I/O Submission Queues: 64 00:27:30.321 Number of I/O Completion Queues: 64 00:27:30.321 00:27:30.321 ZNS Specific Controller Data 00:27:30.321 ============================ 00:27:30.321 Zone Append Size Limit: 0 00:27:30.321 00:27:30.321 00:27:30.321 Active Namespaces 00:27:30.321 ================= 00:27:30.321 Namespace ID:1 00:27:30.321 Error Recovery Timeout: Unlimited 00:27:30.321 Command Set Identifier: NVM (00h) 00:27:30.321 Deallocate: Supported 00:27:30.321 Deallocated/Unwritten Error: Supported 00:27:30.321 Deallocated Read Value: All 0x00 00:27:30.321 Deallocate in Write Zeroes: Not Supported 00:27:30.321 Deallocated Guard Field: 0xFFFF 00:27:30.321 Flush: Supported 00:27:30.321 Reservation: Not Supported 00:27:30.321 Namespace Sharing Capabilities: Multiple Controllers 00:27:30.321 Size (in LBAs): 262144 (1GiB) 00:27:30.321 Capacity (in LBAs): 262144 (1GiB) 00:27:30.321 Utilization (in LBAs): 262144 (1GiB) 00:27:30.321 Thin Provisioning: Not Supported 00:27:30.321 Per-NS Atomic Units: No 00:27:30.321 Maximum Single Source Range Length: 128 00:27:30.321 Maximum Copy Length: 128 00:27:30.321 Maximum Source Range Count: 128 00:27:30.321 NGUID/EUI64 Never Reused: No 00:27:30.321 Namespace Write Protected: No 00:27:30.321 Endurance group ID: 1 00:27:30.321 Number of LBA Formats: 8 00:27:30.321 Current LBA Format: LBA Format #04 00:27:30.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.321 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.321 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.321 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.321 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.321 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.321 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.321 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.321 00:27:30.321 Get Feature FDP: 00:27:30.321 ================ 00:27:30.321 Enabled: Yes 00:27:30.321 FDP configuration index: 0 00:27:30.321 00:27:30.321 FDP configurations log page 00:27:30.321 =========================== 00:27:30.321 Number of FDP configurations: 1 00:27:30.321 Version: 0 00:27:30.321 Size: 112 00:27:30.321 FDP Configuration Descriptor: 0 00:27:30.321 Descriptor Size: 96 00:27:30.321 Reclaim Group Identifier format: 2 00:27:30.321 FDP Volatile Write Cache: Not Present 00:27:30.321 FDP Configuration: Valid 00:27:30.321 Vendor Specific Size: 0 00:27:30.321 Number of Reclaim Groups: 2 00:27:30.321 Number of Recalim Unit Handles: 8 00:27:30.321 Max Placement Identifiers: 128 00:27:30.321 Number of Namespaces Suppprted: 256 00:27:30.321 Reclaim unit Nominal Size: 6000000 bytes 00:27:30.321 Estimated Reclaim Unit Time Limit: Not Reported 00:27:30.321 RUH Desc #000: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #001: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #002: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #003: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #004: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #005: RUH Type: Initially 
Isolated 00:27:30.321 RUH Desc #006: RUH Type: Initially Isolated 00:27:30.321 RUH Desc #007: RUH Type: Initially Isolated 00:27:30.321 00:27:30.321 FDP reclaim unit handle usage log page 00:27:30.321 ====================================== 00:27:30.321 Number of Reclaim Unit Handles: 8 00:27:30.321 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:27:30.321 RUH Usage Desc #001: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #002: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #003: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #004: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #005: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #006: RUH Attributes: Unused 00:27:30.321 RUH Usage Desc #007: RUH Attributes: Unused 00:27:30.321 00:27:30.321 FDP statistics log page 00:27:30.321 ======================= 00:27:30.321 Host bytes with metadata written: 448372736 00:27:30.321 Medi[2024-07-15 07:36:08.754388] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69830 terminated unexpected 00:27:30.321 a bytes with metadata written: 448438272 00:27:30.321 Media bytes erased: 0 00:27:30.321 00:27:30.321 FDP events log page 00:27:30.321 =================== 00:27:30.321 Number of FDP events: 0 00:27:30.321 00:27:30.321 NVM Specific Namespace Data 00:27:30.321 =========================== 00:27:30.321 Logical Block Storage Tag Mask: 0 00:27:30.321 Protection Information Capabilities: 00:27:30.321 16b Guard Protection Information Storage Tag Support: No 00:27:30.321 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.321 Storage Tag Check Read Support: No 00:27:30.321 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.321 ===================================================== 00:27:30.321 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:30.321 ===================================================== 00:27:30.321 Controller Capabilities/Features 00:27:30.321 ================================ 00:27:30.321 Vendor ID: 1b36 00:27:30.321 Subsystem Vendor ID: 1af4 00:27:30.321 Serial Number: 12342 00:27:30.321 Model Number: QEMU NVMe Ctrl 00:27:30.321 Firmware Version: 8.0.0 00:27:30.321 Recommended Arb Burst: 6 00:27:30.321 IEEE OUI Identifier: 00 54 52 00:27:30.321 Multi-path I/O 00:27:30.321 May have multiple subsystem ports: No 00:27:30.321 May have multiple controllers: No 00:27:30.321 Associated with SR-IOV VF: No 00:27:30.321 Max Data Transfer Size: 524288 00:27:30.321 Max Number of Namespaces: 256 00:27:30.321 Max Number of I/O Queues: 64 00:27:30.321 NVMe Specification Version (VS): 1.4 00:27:30.321 NVMe Specification Version (Identify): 1.4 00:27:30.321 Maximum Queue Entries: 2048 00:27:30.321 Contiguous Queues Required: Yes 00:27:30.321 Arbitration 
Mechanisms Supported 00:27:30.321 Weighted Round Robin: Not Supported 00:27:30.321 Vendor Specific: Not Supported 00:27:30.321 Reset Timeout: 7500 ms 00:27:30.321 Doorbell Stride: 4 bytes 00:27:30.321 NVM Subsystem Reset: Not Supported 00:27:30.321 Command Sets Supported 00:27:30.321 NVM Command Set: Supported 00:27:30.321 Boot Partition: Not Supported 00:27:30.321 Memory Page Size Minimum: 4096 bytes 00:27:30.321 Memory Page Size Maximum: 65536 bytes 00:27:30.321 Persistent Memory Region: Not Supported 00:27:30.321 Optional Asynchronous Events Supported 00:27:30.321 Namespace Attribute Notices: Supported 00:27:30.321 Firmware Activation Notices: Not Supported 00:27:30.321 ANA Change Notices: Not Supported 00:27:30.321 PLE Aggregate Log Change Notices: Not Supported 00:27:30.321 LBA Status Info Alert Notices: Not Supported 00:27:30.322 EGE Aggregate Log Change Notices: Not Supported 00:27:30.322 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.322 Zone Descriptor Change Notices: Not Supported 00:27:30.322 Discovery Log Change Notices: Not Supported 00:27:30.322 Controller Attributes 00:27:30.322 128-bit Host Identifier: Not Supported 00:27:30.322 Non-Operational Permissive Mode: Not Supported 00:27:30.322 NVM Sets: Not Supported 00:27:30.322 Read Recovery Levels: Not Supported 00:27:30.322 Endurance Groups: Not Supported 00:27:30.322 Predictable Latency Mode: Not Supported 00:27:30.322 Traffic Based Keep ALive: Not Supported 00:27:30.322 Namespace Granularity: Not Supported 00:27:30.322 SQ Associations: Not Supported 00:27:30.322 UUID List: Not Supported 00:27:30.322 Multi-Domain Subsystem: Not Supported 00:27:30.322 Fixed Capacity Management: Not Supported 00:27:30.322 Variable Capacity Management: Not Supported 00:27:30.322 Delete Endurance Group: Not Supported 00:27:30.322 Delete NVM Set: Not Supported 00:27:30.322 Extended LBA Formats Supported: Supported 00:27:30.322 Flexible Data Placement Supported: Not Supported 00:27:30.322 00:27:30.322 Controller Memory Buffer Support 00:27:30.322 ================================ 00:27:30.322 Supported: No 00:27:30.322 00:27:30.322 Persistent Memory Region Support 00:27:30.322 ================================ 00:27:30.322 Supported: No 00:27:30.322 00:27:30.322 Admin Command Set Attributes 00:27:30.322 ============================ 00:27:30.322 Security Send/Receive: Not Supported 00:27:30.322 Format NVM: Supported 00:27:30.322 Firmware Activate/Download: Not Supported 00:27:30.322 Namespace Management: Supported 00:27:30.322 Device Self-Test: Not Supported 00:27:30.322 Directives: Supported 00:27:30.322 NVMe-MI: Not Supported 00:27:30.322 Virtualization Management: Not Supported 00:27:30.322 Doorbell Buffer Config: Supported 00:27:30.322 Get LBA Status Capability: Not Supported 00:27:30.322 Command & Feature Lockdown Capability: Not Supported 00:27:30.322 Abort Command Limit: 4 00:27:30.322 Async Event Request Limit: 4 00:27:30.322 Number of Firmware Slots: N/A 00:27:30.322 Firmware Slot 1 Read-Only: N/A 00:27:30.322 Firmware Activation Without Reset: N/A 00:27:30.322 Multiple Update Detection Support: N/A 00:27:30.322 Firmware Update Granularity: No Information Provided 00:27:30.322 Per-Namespace SMART Log: Yes 00:27:30.322 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.322 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:27:30.322 Command Effects Log Page: Supported 00:27:30.322 Get Log Page Extended Data: Supported 00:27:30.322 Telemetry Log Pages: Not Supported 00:27:30.322 Persistent Event Log Pages: Not Supported 
00:27:30.322 Supported Log Pages Log Page: May Support 00:27:30.322 Commands Supported & Effects Log Page: Not Supported 00:27:30.322 Feature Identifiers & Effects Log Page:May Support 00:27:30.322 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.322 Data Area 4 for Telemetry Log: Not Supported 00:27:30.322 Error Log Page Entries Supported: 1 00:27:30.322 Keep Alive: Not Supported 00:27:30.322 00:27:30.322 NVM Command Set Attributes 00:27:30.322 ========================== 00:27:30.322 Submission Queue Entry Size 00:27:30.322 Max: 64 00:27:30.322 Min: 64 00:27:30.322 Completion Queue Entry Size 00:27:30.322 Max: 16 00:27:30.322 Min: 16 00:27:30.322 Number of Namespaces: 256 00:27:30.322 Compare Command: Supported 00:27:30.322 Write Uncorrectable Command: Not Supported 00:27:30.322 Dataset Management Command: Supported 00:27:30.322 Write Zeroes Command: Supported 00:27:30.322 Set Features Save Field: Supported 00:27:30.322 Reservations: Not Supported 00:27:30.322 Timestamp: Supported 00:27:30.322 Copy: Supported 00:27:30.322 Volatile Write Cache: Present 00:27:30.322 Atomic Write Unit (Normal): 1 00:27:30.322 Atomic Write Unit (PFail): 1 00:27:30.322 Atomic Compare & Write Unit: 1 00:27:30.322 Fused Compare & Write: Not Supported 00:27:30.322 Scatter-Gather List 00:27:30.322 SGL Command Set: Supported 00:27:30.322 SGL Keyed: Not Supported 00:27:30.322 SGL Bit Bucket Descriptor: Not Supported 00:27:30.322 SGL Metadata Pointer: Not Supported 00:27:30.322 Oversized SGL: Not Supported 00:27:30.322 SGL Metadata Address: Not Supported 00:27:30.322 SGL Offset: Not Supported 00:27:30.322 Transport SGL Data Block: Not Supported 00:27:30.322 Replay Protected Memory Block: Not Supported 00:27:30.322 00:27:30.322 Firmware Slot Information 00:27:30.322 ========================= 00:27:30.322 Active slot: 1 00:27:30.322 Slot 1 Firmware Revision: 1.0 00:27:30.322 00:27:30.322 00:27:30.322 Commands Supported and Effects 00:27:30.322 ============================== 00:27:30.322 Admin Commands 00:27:30.322 -------------- 00:27:30.322 Delete I/O Submission Queue (00h): Supported 00:27:30.322 Create I/O Submission Queue (01h): Supported 00:27:30.322 Get Log Page (02h): Supported 00:27:30.322 Delete I/O Completion Queue (04h): Supported 00:27:30.322 Create I/O Completion Queue (05h): Supported 00:27:30.322 Identify (06h): Supported 00:27:30.322 Abort (08h): Supported 00:27:30.322 Set Features (09h): Supported 00:27:30.322 Get Features (0Ah): Supported 00:27:30.322 Asynchronous Event Request (0Ch): Supported 00:27:30.322 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:30.322 Directive Send (19h): Supported 00:27:30.322 Directive Receive (1Ah): Supported 00:27:30.322 Virtualization Management (1Ch): Supported 00:27:30.322 Doorbell Buffer Config (7Ch): Supported 00:27:30.322 Format NVM (80h): Supported LBA-Change 00:27:30.322 I/O Commands 00:27:30.322 ------------ 00:27:30.322 Flush (00h): Supported LBA-Change 00:27:30.322 Write (01h): Supported LBA-Change 00:27:30.322 Read (02h): Supported 00:27:30.322 Compare (05h): Supported 00:27:30.322 Write Zeroes (08h): Supported LBA-Change 00:27:30.322 Dataset Management (09h): Supported LBA-Change 00:27:30.322 Unknown (0Ch): Supported 00:27:30.322 Unknown (12h): Supported 00:27:30.322 Copy (19h): Supported LBA-Change 00:27:30.322 Unknown (1Dh): Supported LBA-Change 00:27:30.322 00:27:30.322 Error Log 00:27:30.322 ========= 00:27:30.322 00:27:30.322 Arbitration 00:27:30.322 =========== 00:27:30.322 Arbitration Burst: no limit 00:27:30.322 00:27:30.322 
Power Management 00:27:30.322 ================ 00:27:30.322 Number of Power States: 1 00:27:30.322 Current Power State: Power State #0 00:27:30.322 Power State #0: 00:27:30.322 Max Power: 25.00 W 00:27:30.322 Non-Operational State: Operational 00:27:30.322 Entry Latency: 16 microseconds 00:27:30.322 Exit Latency: 4 microseconds 00:27:30.322 Relative Read Throughput: 0 00:27:30.322 Relative Read Latency: 0 00:27:30.322 Relative Write Throughput: 0 00:27:30.322 Relative Write Latency: 0 00:27:30.322 Idle Power: Not Reported 00:27:30.322 Active Power: Not Reported 00:27:30.322 Non-Operational Permissive Mode: Not Supported 00:27:30.322 00:27:30.322 Health Information 00:27:30.322 ================== 00:27:30.322 Critical Warnings: 00:27:30.322 Available Spare Space: OK 00:27:30.322 Temperature: OK 00:27:30.322 Device Reliability: OK 00:27:30.322 Read Only: No 00:27:30.322 Volatile Memory Backup: OK 00:27:30.322 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.322 Available Spare: 0% 00:27:30.322 Available Spare Threshold: 0% 00:27:30.322 Life Percentage Used: 0% 00:27:30.322 Data Units Read: 2284 00:27:30.322 Data Units Written: 1964 00:27:30.322 Host Read Commands: 95707 00:27:30.322 Host Write Commands: 91477 00:27:30.322 Controller Busy Time: 0 minutes 00:27:30.322 Power Cycles: 0 00:27:30.322 Power On Hours: 0 hours 00:27:30.322 Unsafe Shutdowns: 0 00:27:30.322 Unrecoverable Media Errors: 0 00:27:30.322 Lifetime Error Log Entries: 0 00:27:30.322 Warning Temperature Time: 0 minutes 00:27:30.322 Critical Temperature Time: 0 minutes 00:27:30.322 00:27:30.322 Number of Queues 00:27:30.322 ================ 00:27:30.322 Number of I/O Submission Queues: 64 00:27:30.322 Number of I/O Completion Queues: 64 00:27:30.322 00:27:30.322 ZNS Specific Controller Data 00:27:30.322 ============================ 00:27:30.322 Zone Append Size Limit: 0 00:27:30.322 00:27:30.322 00:27:30.322 Active Namespaces 00:27:30.322 ================= 00:27:30.322 Namespace ID:1 00:27:30.322 Error Recovery Timeout: Unlimited 00:27:30.322 Command Set Identifier: NVM (00h) 00:27:30.323 Deallocate: Supported 00:27:30.323 Deallocated/Unwritten Error: Supported 00:27:30.323 Deallocated Read Value: All 0x00 00:27:30.323 Deallocate in Write Zeroes: Not Supported 00:27:30.323 Deallocated Guard Field: 0xFFFF 00:27:30.323 Flush: Supported 00:27:30.323 Reservation: Not Supported 00:27:30.323 Namespace Sharing Capabilities: Private 00:27:30.323 Size (in LBAs): 1048576 (4GiB) 00:27:30.323 Capacity (in LBAs): 1048576 (4GiB) 00:27:30.323 Utilization (in LBAs): 1048576 (4GiB) 00:27:30.323 Thin Provisioning: Not Supported 00:27:30.323 Per-NS Atomic Units: No 00:27:30.323 Maximum Single Source Range Length: 128 00:27:30.323 Maximum Copy Length: 128 00:27:30.323 Maximum Source Range Count: 128 00:27:30.323 NGUID/EUI64 Never Reused: No 00:27:30.323 Namespace Write Protected: No 00:27:30.323 Number of LBA Formats: 8 00:27:30.323 Current LBA Format: LBA Format #04 00:27:30.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.323 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.323 00:27:30.323 NVM 
Specific Namespace Data 00:27:30.323 =========================== 00:27:30.323 Logical Block Storage Tag Mask: 0 00:27:30.323 Protection Information Capabilities: 00:27:30.323 16b Guard Protection Information Storage Tag Support: No 00:27:30.323 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.323 Storage Tag Check Read Support: No 00:27:30.323 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Namespace ID:2 00:27:30.323 Error Recovery Timeout: Unlimited 00:27:30.323 Command Set Identifier: NVM (00h) 00:27:30.323 Deallocate: Supported 00:27:30.323 Deallocated/Unwritten Error: Supported 00:27:30.323 Deallocated Read Value: All 0x00 00:27:30.323 Deallocate in Write Zeroes: Not Supported 00:27:30.323 Deallocated Guard Field: 0xFFFF 00:27:30.323 Flush: Supported 00:27:30.323 Reservation: Not Supported 00:27:30.323 Namespace Sharing Capabilities: Private 00:27:30.323 Size (in LBAs): 1048576 (4GiB) 00:27:30.323 Capacity (in LBAs): 1048576 (4GiB) 00:27:30.323 Utilization (in LBAs): 1048576 (4GiB) 00:27:30.323 Thin Provisioning: Not Supported 00:27:30.323 Per-NS Atomic Units: No 00:27:30.323 Maximum Single Source Range Length: 128 00:27:30.323 Maximum Copy Length: 128 00:27:30.323 Maximum Source Range Count: 128 00:27:30.323 NGUID/EUI64 Never Reused: No 00:27:30.323 Namespace Write Protected: No 00:27:30.323 Number of LBA Formats: 8 00:27:30.323 Current LBA Format: LBA Format #04 00:27:30.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.323 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.323 00:27:30.323 NVM Specific Namespace Data 00:27:30.323 =========================== 00:27:30.323 Logical Block Storage Tag Mask: 0 00:27:30.323 Protection Information Capabilities: 00:27:30.323 16b Guard Protection Information Storage Tag Support: No 00:27:30.323 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.323 Storage Tag Check Read Support: No 00:27:30.323 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.323 Namespace ID:3 00:27:30.323 Error Recovery Timeout: Unlimited 00:27:30.323 Command Set Identifier: NVM (00h) 00:27:30.323 Deallocate: Supported 00:27:30.323 Deallocated/Unwritten Error: Supported 00:27:30.323 Deallocated Read Value: All 0x00 00:27:30.323 Deallocate in Write Zeroes: Not Supported 00:27:30.323 Deallocated Guard Field: 0xFFFF 00:27:30.323 Flush: Supported 00:27:30.323 Reservation: Not Supported 00:27:30.323 Namespace Sharing Capabilities: Private 00:27:30.323 Size (in LBAs): 1048576 (4GiB) 00:27:30.323 Capacity (in LBAs): 1048576 (4GiB) 00:27:30.323 Utilization (in LBAs): 1048576 (4GiB) 00:27:30.323 Thin Provisioning: Not Supported 00:27:30.323 Per-NS Atomic Units: No 00:27:30.323 Maximum Single Source Range Length: 128 00:27:30.323 Maximum Copy Length: 128 00:27:30.323 Maximum Source Range Count: 128 00:27:30.323 NGUID/EUI64 Never Reused: No 00:27:30.323 Namespace Write Protected: No 00:27:30.323 Number of LBA Formats: 8 00:27:30.323 Current LBA Format: LBA Format #04 00:27:30.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.324 00:27:30.324 NVM Specific Namespace Data 00:27:30.324 =========================== 00:27:30.324 Logical Block Storage Tag Mask: 0 00:27:30.324 Protection Information Capabilities: 00:27:30.324 16b Guard Protection Information Storage Tag Support: No 00:27:30.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.324 Storage Tag Check Read Support: No 00:27:30.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.324 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:30.324 07:36:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:30.582 ===================================================== 00:27:30.582 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:30.582 
===================================================== 00:27:30.582 Controller Capabilities/Features 00:27:30.582 ================================ 00:27:30.582 Vendor ID: 1b36 00:27:30.582 Subsystem Vendor ID: 1af4 00:27:30.582 Serial Number: 12340 00:27:30.582 Model Number: QEMU NVMe Ctrl 00:27:30.582 Firmware Version: 8.0.0 00:27:30.582 Recommended Arb Burst: 6 00:27:30.582 IEEE OUI Identifier: 00 54 52 00:27:30.582 Multi-path I/O 00:27:30.582 May have multiple subsystem ports: No 00:27:30.582 May have multiple controllers: No 00:27:30.582 Associated with SR-IOV VF: No 00:27:30.582 Max Data Transfer Size: 524288 00:27:30.582 Max Number of Namespaces: 256 00:27:30.582 Max Number of I/O Queues: 64 00:27:30.582 NVMe Specification Version (VS): 1.4 00:27:30.582 NVMe Specification Version (Identify): 1.4 00:27:30.582 Maximum Queue Entries: 2048 00:27:30.582 Contiguous Queues Required: Yes 00:27:30.582 Arbitration Mechanisms Supported 00:27:30.582 Weighted Round Robin: Not Supported 00:27:30.582 Vendor Specific: Not Supported 00:27:30.582 Reset Timeout: 7500 ms 00:27:30.582 Doorbell Stride: 4 bytes 00:27:30.582 NVM Subsystem Reset: Not Supported 00:27:30.582 Command Sets Supported 00:27:30.582 NVM Command Set: Supported 00:27:30.582 Boot Partition: Not Supported 00:27:30.582 Memory Page Size Minimum: 4096 bytes 00:27:30.582 Memory Page Size Maximum: 65536 bytes 00:27:30.582 Persistent Memory Region: Not Supported 00:27:30.582 Optional Asynchronous Events Supported 00:27:30.582 Namespace Attribute Notices: Supported 00:27:30.582 Firmware Activation Notices: Not Supported 00:27:30.582 ANA Change Notices: Not Supported 00:27:30.582 PLE Aggregate Log Change Notices: Not Supported 00:27:30.582 LBA Status Info Alert Notices: Not Supported 00:27:30.582 EGE Aggregate Log Change Notices: Not Supported 00:27:30.582 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.582 Zone Descriptor Change Notices: Not Supported 00:27:30.582 Discovery Log Change Notices: Not Supported 00:27:30.582 Controller Attributes 00:27:30.582 128-bit Host Identifier: Not Supported 00:27:30.582 Non-Operational Permissive Mode: Not Supported 00:27:30.582 NVM Sets: Not Supported 00:27:30.582 Read Recovery Levels: Not Supported 00:27:30.583 Endurance Groups: Not Supported 00:27:30.583 Predictable Latency Mode: Not Supported 00:27:30.583 Traffic Based Keep ALive: Not Supported 00:27:30.583 Namespace Granularity: Not Supported 00:27:30.583 SQ Associations: Not Supported 00:27:30.583 UUID List: Not Supported 00:27:30.583 Multi-Domain Subsystem: Not Supported 00:27:30.583 Fixed Capacity Management: Not Supported 00:27:30.583 Variable Capacity Management: Not Supported 00:27:30.583 Delete Endurance Group: Not Supported 00:27:30.583 Delete NVM Set: Not Supported 00:27:30.583 Extended LBA Formats Supported: Supported 00:27:30.583 Flexible Data Placement Supported: Not Supported 00:27:30.583 00:27:30.583 Controller Memory Buffer Support 00:27:30.583 ================================ 00:27:30.583 Supported: No 00:27:30.583 00:27:30.583 Persistent Memory Region Support 00:27:30.583 ================================ 00:27:30.583 Supported: No 00:27:30.583 00:27:30.583 Admin Command Set Attributes 00:27:30.583 ============================ 00:27:30.583 Security Send/Receive: Not Supported 00:27:30.583 Format NVM: Supported 00:27:30.583 Firmware Activate/Download: Not Supported 00:27:30.583 Namespace Management: Supported 00:27:30.583 Device Self-Test: Not Supported 00:27:30.583 Directives: Supported 00:27:30.583 NVMe-MI: Not Supported 
00:27:30.583 Virtualization Management: Not Supported 00:27:30.583 Doorbell Buffer Config: Supported 00:27:30.583 Get LBA Status Capability: Not Supported 00:27:30.583 Command & Feature Lockdown Capability: Not Supported 00:27:30.583 Abort Command Limit: 4 00:27:30.583 Async Event Request Limit: 4 00:27:30.583 Number of Firmware Slots: N/A 00:27:30.583 Firmware Slot 1 Read-Only: N/A 00:27:30.583 Firmware Activation Without Reset: N/A 00:27:30.583 Multiple Update Detection Support: N/A 00:27:30.583 Firmware Update Granularity: No Information Provided 00:27:30.583 Per-Namespace SMART Log: Yes 00:27:30.583 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.583 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:27:30.583 Command Effects Log Page: Supported 00:27:30.583 Get Log Page Extended Data: Supported 00:27:30.583 Telemetry Log Pages: Not Supported 00:27:30.583 Persistent Event Log Pages: Not Supported 00:27:30.583 Supported Log Pages Log Page: May Support 00:27:30.583 Commands Supported & Effects Log Page: Not Supported 00:27:30.583 Feature Identifiers & Effects Log Page:May Support 00:27:30.583 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.583 Data Area 4 for Telemetry Log: Not Supported 00:27:30.583 Error Log Page Entries Supported: 1 00:27:30.583 Keep Alive: Not Supported 00:27:30.583 00:27:30.583 NVM Command Set Attributes 00:27:30.583 ========================== 00:27:30.583 Submission Queue Entry Size 00:27:30.583 Max: 64 00:27:30.583 Min: 64 00:27:30.583 Completion Queue Entry Size 00:27:30.583 Max: 16 00:27:30.583 Min: 16 00:27:30.583 Number of Namespaces: 256 00:27:30.583 Compare Command: Supported 00:27:30.583 Write Uncorrectable Command: Not Supported 00:27:30.583 Dataset Management Command: Supported 00:27:30.583 Write Zeroes Command: Supported 00:27:30.583 Set Features Save Field: Supported 00:27:30.583 Reservations: Not Supported 00:27:30.583 Timestamp: Supported 00:27:30.583 Copy: Supported 00:27:30.583 Volatile Write Cache: Present 00:27:30.583 Atomic Write Unit (Normal): 1 00:27:30.583 Atomic Write Unit (PFail): 1 00:27:30.583 Atomic Compare & Write Unit: 1 00:27:30.583 Fused Compare & Write: Not Supported 00:27:30.583 Scatter-Gather List 00:27:30.583 SGL Command Set: Supported 00:27:30.583 SGL Keyed: Not Supported 00:27:30.583 SGL Bit Bucket Descriptor: Not Supported 00:27:30.583 SGL Metadata Pointer: Not Supported 00:27:30.583 Oversized SGL: Not Supported 00:27:30.583 SGL Metadata Address: Not Supported 00:27:30.583 SGL Offset: Not Supported 00:27:30.583 Transport SGL Data Block: Not Supported 00:27:30.583 Replay Protected Memory Block: Not Supported 00:27:30.583 00:27:30.583 Firmware Slot Information 00:27:30.583 ========================= 00:27:30.583 Active slot: 1 00:27:30.583 Slot 1 Firmware Revision: 1.0 00:27:30.583 00:27:30.583 00:27:30.583 Commands Supported and Effects 00:27:30.583 ============================== 00:27:30.583 Admin Commands 00:27:30.583 -------------- 00:27:30.583 Delete I/O Submission Queue (00h): Supported 00:27:30.583 Create I/O Submission Queue (01h): Supported 00:27:30.583 Get Log Page (02h): Supported 00:27:30.583 Delete I/O Completion Queue (04h): Supported 00:27:30.583 Create I/O Completion Queue (05h): Supported 00:27:30.583 Identify (06h): Supported 00:27:30.583 Abort (08h): Supported 00:27:30.583 Set Features (09h): Supported 00:27:30.583 Get Features (0Ah): Supported 00:27:30.583 Asynchronous Event Request (0Ch): Supported 00:27:30.583 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:30.583 Directive 
Send (19h): Supported 00:27:30.583 Directive Receive (1Ah): Supported 00:27:30.583 Virtualization Management (1Ch): Supported 00:27:30.583 Doorbell Buffer Config (7Ch): Supported 00:27:30.583 Format NVM (80h): Supported LBA-Change 00:27:30.583 I/O Commands 00:27:30.583 ------------ 00:27:30.583 Flush (00h): Supported LBA-Change 00:27:30.583 Write (01h): Supported LBA-Change 00:27:30.583 Read (02h): Supported 00:27:30.583 Compare (05h): Supported 00:27:30.583 Write Zeroes (08h): Supported LBA-Change 00:27:30.584 Dataset Management (09h): Supported LBA-Change 00:27:30.584 Unknown (0Ch): Supported 00:27:30.584 Unknown (12h): Supported 00:27:30.584 Copy (19h): Supported LBA-Change 00:27:30.584 Unknown (1Dh): Supported LBA-Change 00:27:30.584 00:27:30.584 Error Log 00:27:30.584 ========= 00:27:30.584 00:27:30.584 Arbitration 00:27:30.584 =========== 00:27:30.584 Arbitration Burst: no limit 00:27:30.584 00:27:30.584 Power Management 00:27:30.584 ================ 00:27:30.584 Number of Power States: 1 00:27:30.584 Current Power State: Power State #0 00:27:30.584 Power State #0: 00:27:30.584 Max Power: 25.00 W 00:27:30.584 Non-Operational State: Operational 00:27:30.584 Entry Latency: 16 microseconds 00:27:30.584 Exit Latency: 4 microseconds 00:27:30.584 Relative Read Throughput: 0 00:27:30.584 Relative Read Latency: 0 00:27:30.584 Relative Write Throughput: 0 00:27:30.584 Relative Write Latency: 0 00:27:30.584 Idle Power: Not Reported 00:27:30.584 Active Power: Not Reported 00:27:30.584 Non-Operational Permissive Mode: Not Supported 00:27:30.584 00:27:30.584 Health Information 00:27:30.584 ================== 00:27:30.584 Critical Warnings: 00:27:30.584 Available Spare Space: OK 00:27:30.584 Temperature: OK 00:27:30.584 Device Reliability: OK 00:27:30.584 Read Only: No 00:27:30.584 Volatile Memory Backup: OK 00:27:30.584 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.584 Available Spare: 0% 00:27:30.584 Available Spare Threshold: 0% 00:27:30.584 Life Percentage Used: 0% 00:27:30.584 Data Units Read: 1068 00:27:30.584 Data Units Written: 905 00:27:30.584 Host Read Commands: 45319 00:27:30.584 Host Write Commands: 43861 00:27:30.584 Controller Busy Time: 0 minutes 00:27:30.584 Power Cycles: 0 00:27:30.584 Power On Hours: 0 hours 00:27:30.584 Unsafe Shutdowns: 0 00:27:30.584 Unrecoverable Media Errors: 0 00:27:30.584 Lifetime Error Log Entries: 0 00:27:30.584 Warning Temperature Time: 0 minutes 00:27:30.584 Critical Temperature Time: 0 minutes 00:27:30.584 00:27:30.584 Number of Queues 00:27:30.584 ================ 00:27:30.584 Number of I/O Submission Queues: 64 00:27:30.584 Number of I/O Completion Queues: 64 00:27:30.584 00:27:30.584 ZNS Specific Controller Data 00:27:30.584 ============================ 00:27:30.584 Zone Append Size Limit: 0 00:27:30.584 00:27:30.584 00:27:30.584 Active Namespaces 00:27:30.584 ================= 00:27:30.584 Namespace ID:1 00:27:30.584 Error Recovery Timeout: Unlimited 00:27:30.584 Command Set Identifier: NVM (00h) 00:27:30.584 Deallocate: Supported 00:27:30.584 Deallocated/Unwritten Error: Supported 00:27:30.584 Deallocated Read Value: All 0x00 00:27:30.584 Deallocate in Write Zeroes: Not Supported 00:27:30.584 Deallocated Guard Field: 0xFFFF 00:27:30.584 Flush: Supported 00:27:30.584 Reservation: Not Supported 00:27:30.584 Metadata Transferred as: Separate Metadata Buffer 00:27:30.584 Namespace Sharing Capabilities: Private 00:27:30.584 Size (in LBAs): 1548666 (5GiB) 00:27:30.584 Capacity (in 
LBAs): 1548666 (5GiB) 00:27:30.584 Utilization (in LBAs): 1548666 (5GiB) 00:27:30.584 Thin Provisioning: Not Supported 00:27:30.584 Per-NS Atomic Units: No 00:27:30.584 Maximum Single Source Range Length: 128 00:27:30.584 Maximum Copy Length: 128 00:27:30.584 Maximum Source Range Count: 128 00:27:30.584 NGUID/EUI64 Never Reused: No 00:27:30.584 Namespace Write Protected: No 00:27:30.584 Number of LBA Formats: 8 00:27:30.584 Current LBA Format: LBA Format #07 00:27:30.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.584 00:27:30.584 NVM Specific Namespace Data 00:27:30.584 =========================== 00:27:30.584 Logical Block Storage Tag Mask: 0 00:27:30.584 Protection Information Capabilities: 00:27:30.584 16b Guard Protection Information Storage Tag Support: No 00:27:30.584 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.584 Storage Tag Check Read Support: No 00:27:30.584 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.584 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:30.584 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:27:30.843 ===================================================== 00:27:30.843 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:30.843 ===================================================== 00:27:30.843 Controller Capabilities/Features 00:27:30.843 ================================ 00:27:30.843 Vendor ID: 1b36 00:27:30.843 Subsystem Vendor ID: 1af4 00:27:30.843 Serial Number: 12341 00:27:30.843 Model Number: QEMU NVMe Ctrl 00:27:30.843 Firmware Version: 8.0.0 00:27:30.843 Recommended Arb Burst: 6 00:27:30.843 IEEE OUI Identifier: 00 54 52 00:27:30.843 Multi-path I/O 00:27:30.843 May have multiple subsystem ports: No 00:27:30.843 May have multiple controllers: No 00:27:30.843 Associated with SR-IOV VF: No 00:27:30.843 Max Data Transfer Size: 524288 00:27:30.843 Max Number of Namespaces: 256 00:27:30.843 Max Number of I/O Queues: 64 00:27:30.843 NVMe Specification Version (VS): 1.4 00:27:30.843 NVMe Specification Version (Identify): 1.4 00:27:30.843 Maximum Queue Entries: 2048 00:27:30.843 Contiguous Queues Required: Yes 00:27:30.843 Arbitration Mechanisms Supported 00:27:30.843 Weighted Round 
Robin: Not Supported 00:27:30.843 Vendor Specific: Not Supported 00:27:30.843 Reset Timeout: 7500 ms 00:27:30.843 Doorbell Stride: 4 bytes 00:27:30.843 NVM Subsystem Reset: Not Supported 00:27:30.843 Command Sets Supported 00:27:30.843 NVM Command Set: Supported 00:27:30.843 Boot Partition: Not Supported 00:27:30.843 Memory Page Size Minimum: 4096 bytes 00:27:30.843 Memory Page Size Maximum: 65536 bytes 00:27:30.843 Persistent Memory Region: Not Supported 00:27:30.843 Optional Asynchronous Events Supported 00:27:30.843 Namespace Attribute Notices: Supported 00:27:30.843 Firmware Activation Notices: Not Supported 00:27:30.843 ANA Change Notices: Not Supported 00:27:30.843 PLE Aggregate Log Change Notices: Not Supported 00:27:30.843 LBA Status Info Alert Notices: Not Supported 00:27:30.843 EGE Aggregate Log Change Notices: Not Supported 00:27:30.843 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.843 Zone Descriptor Change Notices: Not Supported 00:27:30.844 Discovery Log Change Notices: Not Supported 00:27:30.844 Controller Attributes 00:27:30.844 128-bit Host Identifier: Not Supported 00:27:30.844 Non-Operational Permissive Mode: Not Supported 00:27:30.844 NVM Sets: Not Supported 00:27:30.844 Read Recovery Levels: Not Supported 00:27:30.844 Endurance Groups: Not Supported 00:27:30.844 Predictable Latency Mode: Not Supported 00:27:30.844 Traffic Based Keep ALive: Not Supported 00:27:30.844 Namespace Granularity: Not Supported 00:27:30.844 SQ Associations: Not Supported 00:27:30.844 UUID List: Not Supported 00:27:30.844 Multi-Domain Subsystem: Not Supported 00:27:30.844 Fixed Capacity Management: Not Supported 00:27:30.844 Variable Capacity Management: Not Supported 00:27:30.844 Delete Endurance Group: Not Supported 00:27:30.844 Delete NVM Set: Not Supported 00:27:30.844 Extended LBA Formats Supported: Supported 00:27:30.844 Flexible Data Placement Supported: Not Supported 00:27:30.844 00:27:30.844 Controller Memory Buffer Support 00:27:30.844 ================================ 00:27:30.844 Supported: No 00:27:30.844 00:27:30.844 Persistent Memory Region Support 00:27:30.844 ================================ 00:27:30.844 Supported: No 00:27:30.844 00:27:30.844 Admin Command Set Attributes 00:27:30.844 ============================ 00:27:30.844 Security Send/Receive: Not Supported 00:27:30.844 Format NVM: Supported 00:27:30.844 Firmware Activate/Download: Not Supported 00:27:30.844 Namespace Management: Supported 00:27:30.844 Device Self-Test: Not Supported 00:27:30.844 Directives: Supported 00:27:30.844 NVMe-MI: Not Supported 00:27:30.844 Virtualization Management: Not Supported 00:27:30.844 Doorbell Buffer Config: Supported 00:27:30.844 Get LBA Status Capability: Not Supported 00:27:30.844 Command & Feature Lockdown Capability: Not Supported 00:27:30.844 Abort Command Limit: 4 00:27:30.844 Async Event Request Limit: 4 00:27:30.844 Number of Firmware Slots: N/A 00:27:30.844 Firmware Slot 1 Read-Only: N/A 00:27:30.844 Firmware Activation Without Reset: N/A 00:27:30.844 Multiple Update Detection Support: N/A 00:27:30.844 Firmware Update Granularity: No Information Provided 00:27:30.844 Per-Namespace SMART Log: Yes 00:27:30.844 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.844 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:27:30.844 Command Effects Log Page: Supported 00:27:30.844 Get Log Page Extended Data: Supported 00:27:30.844 Telemetry Log Pages: Not Supported 00:27:30.844 Persistent Event Log Pages: Not Supported 00:27:30.844 Supported Log Pages Log Page: May Support 
00:27:30.844 Commands Supported & Effects Log Page: Not Supported 00:27:30.844 Feature Identifiers & Effects Log Page:May Support 00:27:30.844 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.844 Data Area 4 for Telemetry Log: Not Supported 00:27:30.844 Error Log Page Entries Supported: 1 00:27:30.844 Keep Alive: Not Supported 00:27:30.844 00:27:30.844 NVM Command Set Attributes 00:27:30.844 ========================== 00:27:30.844 Submission Queue Entry Size 00:27:30.844 Max: 64 00:27:30.844 Min: 64 00:27:30.844 Completion Queue Entry Size 00:27:30.844 Max: 16 00:27:30.844 Min: 16 00:27:30.844 Number of Namespaces: 256 00:27:30.844 Compare Command: Supported 00:27:30.844 Write Uncorrectable Command: Not Supported 00:27:30.844 Dataset Management Command: Supported 00:27:30.844 Write Zeroes Command: Supported 00:27:30.844 Set Features Save Field: Supported 00:27:30.844 Reservations: Not Supported 00:27:30.844 Timestamp: Supported 00:27:30.844 Copy: Supported 00:27:30.844 Volatile Write Cache: Present 00:27:30.844 Atomic Write Unit (Normal): 1 00:27:30.844 Atomic Write Unit (PFail): 1 00:27:30.844 Atomic Compare & Write Unit: 1 00:27:30.844 Fused Compare & Write: Not Supported 00:27:30.844 Scatter-Gather List 00:27:30.844 SGL Command Set: Supported 00:27:30.844 SGL Keyed: Not Supported 00:27:30.844 SGL Bit Bucket Descriptor: Not Supported 00:27:30.844 SGL Metadata Pointer: Not Supported 00:27:30.844 Oversized SGL: Not Supported 00:27:30.844 SGL Metadata Address: Not Supported 00:27:30.844 SGL Offset: Not Supported 00:27:30.844 Transport SGL Data Block: Not Supported 00:27:30.844 Replay Protected Memory Block: Not Supported 00:27:30.844 00:27:30.844 Firmware Slot Information 00:27:30.844 ========================= 00:27:30.844 Active slot: 1 00:27:30.844 Slot 1 Firmware Revision: 1.0 00:27:30.844 00:27:30.844 00:27:30.844 Commands Supported and Effects 00:27:30.844 ============================== 00:27:30.844 Admin Commands 00:27:30.844 -------------- 00:27:30.844 Delete I/O Submission Queue (00h): Supported 00:27:30.844 Create I/O Submission Queue (01h): Supported 00:27:30.844 Get Log Page (02h): Supported 00:27:30.844 Delete I/O Completion Queue (04h): Supported 00:27:30.844 Create I/O Completion Queue (05h): Supported 00:27:30.844 Identify (06h): Supported 00:27:30.844 Abort (08h): Supported 00:27:30.844 Set Features (09h): Supported 00:27:30.844 Get Features (0Ah): Supported 00:27:30.844 Asynchronous Event Request (0Ch): Supported 00:27:30.844 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:30.844 Directive Send (19h): Supported 00:27:30.844 Directive Receive (1Ah): Supported 00:27:30.844 Virtualization Management (1Ch): Supported 00:27:30.844 Doorbell Buffer Config (7Ch): Supported 00:27:30.844 Format NVM (80h): Supported LBA-Change 00:27:30.844 I/O Commands 00:27:30.844 ------------ 00:27:30.844 Flush (00h): Supported LBA-Change 00:27:30.844 Write (01h): Supported LBA-Change 00:27:30.844 Read (02h): Supported 00:27:30.844 Compare (05h): Supported 00:27:30.844 Write Zeroes (08h): Supported LBA-Change 00:27:30.844 Dataset Management (09h): Supported LBA-Change 00:27:30.844 Unknown (0Ch): Supported 00:27:30.844 Unknown (12h): Supported 00:27:30.844 Copy (19h): Supported LBA-Change 00:27:30.844 Unknown (1Dh): Supported LBA-Change 00:27:30.844 00:27:30.844 Error Log 00:27:30.844 ========= 00:27:30.844 00:27:30.844 Arbitration 00:27:30.844 =========== 00:27:30.844 Arbitration Burst: no limit 00:27:30.844 00:27:30.844 Power Management 00:27:30.844 ================ 
00:27:30.844 Number of Power States: 1 00:27:30.844 Current Power State: Power State #0 00:27:30.844 Power State #0: 00:27:30.844 Max Power: 25.00 W 00:27:30.844 Non-Operational State: Operational 00:27:30.844 Entry Latency: 16 microseconds 00:27:30.844 Exit Latency: 4 microseconds 00:27:30.844 Relative Read Throughput: 0 00:27:30.844 Relative Read Latency: 0 00:27:30.844 Relative Write Throughput: 0 00:27:30.844 Relative Write Latency: 0 00:27:30.844 Idle Power: Not Reported 00:27:30.844 Active Power: Not Reported 00:27:30.844 Non-Operational Permissive Mode: Not Supported 00:27:30.844 00:27:30.844 Health Information 00:27:30.844 ================== 00:27:30.844 Critical Warnings: 00:27:30.844 Available Spare Space: OK 00:27:30.844 Temperature: OK 00:27:30.844 Device Reliability: OK 00:27:30.844 Read Only: No 00:27:30.844 Volatile Memory Backup: OK 00:27:30.844 Current Temperature: 323 Kelvin (50 Celsius) 00:27:30.844 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:30.844 Available Spare: 0% 00:27:30.844 Available Spare Threshold: 0% 00:27:30.844 Life Percentage Used: 0% 00:27:30.844 Data Units Read: 770 00:27:30.844 Data Units Written: 621 00:27:30.844 Host Read Commands: 32441 00:27:30.844 Host Write Commands: 30218 00:27:30.844 Controller Busy Time: 0 minutes 00:27:30.844 Power Cycles: 0 00:27:30.844 Power On Hours: 0 hours 00:27:30.844 Unsafe Shutdowns: 0 00:27:30.844 Unrecoverable Media Errors: 0 00:27:30.844 Lifetime Error Log Entries: 0 00:27:30.844 Warning Temperature Time: 0 minutes 00:27:30.844 Critical Temperature Time: 0 minutes 00:27:30.844 00:27:30.844 Number of Queues 00:27:30.844 ================ 00:27:30.844 Number of I/O Submission Queues: 64 00:27:30.844 Number of I/O Completion Queues: 64 00:27:30.844 00:27:30.844 ZNS Specific Controller Data 00:27:30.844 ============================ 00:27:30.844 Zone Append Size Limit: 0 00:27:30.844 00:27:30.844 00:27:30.844 Active Namespaces 00:27:30.844 ================= 00:27:30.844 Namespace ID:1 00:27:30.844 Error Recovery Timeout: Unlimited 00:27:30.844 Command Set Identifier: NVM (00h) 00:27:30.844 Deallocate: Supported 00:27:30.844 Deallocated/Unwritten Error: Supported 00:27:30.844 Deallocated Read Value: All 0x00 00:27:30.844 Deallocate in Write Zeroes: Not Supported 00:27:30.844 Deallocated Guard Field: 0xFFFF 00:27:30.844 Flush: Supported 00:27:30.844 Reservation: Not Supported 00:27:30.844 Namespace Sharing Capabilities: Private 00:27:30.844 Size (in LBAs): 1310720 (5GiB) 00:27:30.844 Capacity (in LBAs): 1310720 (5GiB) 00:27:30.844 Utilization (in LBAs): 1310720 (5GiB) 00:27:30.844 Thin Provisioning: Not Supported 00:27:30.844 Per-NS Atomic Units: No 00:27:30.844 Maximum Single Source Range Length: 128 00:27:30.844 Maximum Copy Length: 128 00:27:30.844 Maximum Source Range Count: 128 00:27:30.844 NGUID/EUI64 Never Reused: No 00:27:30.844 Namespace Write Protected: No 00:27:30.844 Number of LBA Formats: 8 00:27:30.844 Current LBA Format: LBA Format #04 00:27:30.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.844 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:30.844 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:30.844 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:30.844 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:30.844 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:30.844 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:30.844 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:30.844 00:27:30.844 NVM Specific Namespace Data 00:27:30.844 
=========================== 00:27:30.844 Logical Block Storage Tag Mask: 0 00:27:30.844 Protection Information Capabilities: 00:27:30.844 16b Guard Protection Information Storage Tag Support: No 00:27:30.844 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:30.844 Storage Tag Check Read Support: No 00:27:30.844 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:30.844 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:30.844 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:27:31.102 ===================================================== 00:27:31.102 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:31.102 ===================================================== 00:27:31.102 Controller Capabilities/Features 00:27:31.102 ================================ 00:27:31.102 Vendor ID: 1b36 00:27:31.102 Subsystem Vendor ID: 1af4 00:27:31.102 Serial Number: 12342 00:27:31.102 Model Number: QEMU NVMe Ctrl 00:27:31.102 Firmware Version: 8.0.0 00:27:31.102 Recommended Arb Burst: 6 00:27:31.102 IEEE OUI Identifier: 00 54 52 00:27:31.102 Multi-path I/O 00:27:31.102 May have multiple subsystem ports: No 00:27:31.102 May have multiple controllers: No 00:27:31.102 Associated with SR-IOV VF: No 00:27:31.102 Max Data Transfer Size: 524288 00:27:31.102 Max Number of Namespaces: 256 00:27:31.102 Max Number of I/O Queues: 64 00:27:31.102 NVMe Specification Version (VS): 1.4 00:27:31.102 NVMe Specification Version (Identify): 1.4 00:27:31.102 Maximum Queue Entries: 2048 00:27:31.102 Contiguous Queues Required: Yes 00:27:31.102 Arbitration Mechanisms Supported 00:27:31.102 Weighted Round Robin: Not Supported 00:27:31.102 Vendor Specific: Not Supported 00:27:31.102 Reset Timeout: 7500 ms 00:27:31.102 Doorbell Stride: 4 bytes 00:27:31.102 NVM Subsystem Reset: Not Supported 00:27:31.102 Command Sets Supported 00:27:31.102 NVM Command Set: Supported 00:27:31.102 Boot Partition: Not Supported 00:27:31.102 Memory Page Size Minimum: 4096 bytes 00:27:31.102 Memory Page Size Maximum: 65536 bytes 00:27:31.102 Persistent Memory Region: Not Supported 00:27:31.102 Optional Asynchronous Events Supported 00:27:31.102 Namespace Attribute Notices: Supported 00:27:31.102 Firmware Activation Notices: Not Supported 00:27:31.102 ANA Change Notices: Not Supported 00:27:31.102 PLE Aggregate Log Change Notices: Not Supported 00:27:31.102 LBA Status Info Alert Notices: Not Supported 00:27:31.102 EGE Aggregate Log Change Notices: Not Supported 00:27:31.102 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.102 Zone Descriptor Change Notices: Not Supported 00:27:31.102 Discovery Log Change Notices: Not Supported 
00:27:31.102 Controller Attributes 00:27:31.102 128-bit Host Identifier: Not Supported 00:27:31.102 Non-Operational Permissive Mode: Not Supported 00:27:31.102 NVM Sets: Not Supported 00:27:31.102 Read Recovery Levels: Not Supported 00:27:31.102 Endurance Groups: Not Supported 00:27:31.102 Predictable Latency Mode: Not Supported 00:27:31.102 Traffic Based Keep ALive: Not Supported 00:27:31.102 Namespace Granularity: Not Supported 00:27:31.102 SQ Associations: Not Supported 00:27:31.102 UUID List: Not Supported 00:27:31.102 Multi-Domain Subsystem: Not Supported 00:27:31.102 Fixed Capacity Management: Not Supported 00:27:31.102 Variable Capacity Management: Not Supported 00:27:31.102 Delete Endurance Group: Not Supported 00:27:31.102 Delete NVM Set: Not Supported 00:27:31.102 Extended LBA Formats Supported: Supported 00:27:31.102 Flexible Data Placement Supported: Not Supported 00:27:31.102 00:27:31.102 Controller Memory Buffer Support 00:27:31.102 ================================ 00:27:31.102 Supported: No 00:27:31.103 00:27:31.103 Persistent Memory Region Support 00:27:31.103 ================================ 00:27:31.103 Supported: No 00:27:31.103 00:27:31.103 Admin Command Set Attributes 00:27:31.103 ============================ 00:27:31.103 Security Send/Receive: Not Supported 00:27:31.103 Format NVM: Supported 00:27:31.103 Firmware Activate/Download: Not Supported 00:27:31.103 Namespace Management: Supported 00:27:31.103 Device Self-Test: Not Supported 00:27:31.103 Directives: Supported 00:27:31.103 NVMe-MI: Not Supported 00:27:31.103 Virtualization Management: Not Supported 00:27:31.103 Doorbell Buffer Config: Supported 00:27:31.103 Get LBA Status Capability: Not Supported 00:27:31.103 Command & Feature Lockdown Capability: Not Supported 00:27:31.103 Abort Command Limit: 4 00:27:31.103 Async Event Request Limit: 4 00:27:31.103 Number of Firmware Slots: N/A 00:27:31.103 Firmware Slot 1 Read-Only: N/A 00:27:31.103 Firmware Activation Without Reset: N/A 00:27:31.103 Multiple Update Detection Support: N/A 00:27:31.103 Firmware Update Granularity: No Information Provided 00:27:31.103 Per-Namespace SMART Log: Yes 00:27:31.103 Asymmetric Namespace Access Log Page: Not Supported 00:27:31.103 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:27:31.103 Command Effects Log Page: Supported 00:27:31.103 Get Log Page Extended Data: Supported 00:27:31.103 Telemetry Log Pages: Not Supported 00:27:31.103 Persistent Event Log Pages: Not Supported 00:27:31.103 Supported Log Pages Log Page: May Support 00:27:31.103 Commands Supported & Effects Log Page: Not Supported 00:27:31.103 Feature Identifiers & Effects Log Page:May Support 00:27:31.103 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.103 Data Area 4 for Telemetry Log: Not Supported 00:27:31.103 Error Log Page Entries Supported: 1 00:27:31.103 Keep Alive: Not Supported 00:27:31.103 00:27:31.103 NVM Command Set Attributes 00:27:31.103 ========================== 00:27:31.103 Submission Queue Entry Size 00:27:31.103 Max: 64 00:27:31.103 Min: 64 00:27:31.103 Completion Queue Entry Size 00:27:31.103 Max: 16 00:27:31.103 Min: 16 00:27:31.103 Number of Namespaces: 256 00:27:31.103 Compare Command: Supported 00:27:31.103 Write Uncorrectable Command: Not Supported 00:27:31.103 Dataset Management Command: Supported 00:27:31.103 Write Zeroes Command: Supported 00:27:31.103 Set Features Save Field: Supported 00:27:31.103 Reservations: Not Supported 00:27:31.103 Timestamp: Supported 00:27:31.103 Copy: Supported 00:27:31.103 Volatile Write Cache: Present 
00:27:31.103 Atomic Write Unit (Normal): 1 00:27:31.103 Atomic Write Unit (PFail): 1 00:27:31.103 Atomic Compare & Write Unit: 1 00:27:31.103 Fused Compare & Write: Not Supported 00:27:31.103 Scatter-Gather List 00:27:31.103 SGL Command Set: Supported 00:27:31.103 SGL Keyed: Not Supported 00:27:31.103 SGL Bit Bucket Descriptor: Not Supported 00:27:31.103 SGL Metadata Pointer: Not Supported 00:27:31.103 Oversized SGL: Not Supported 00:27:31.103 SGL Metadata Address: Not Supported 00:27:31.103 SGL Offset: Not Supported 00:27:31.103 Transport SGL Data Block: Not Supported 00:27:31.103 Replay Protected Memory Block: Not Supported 00:27:31.103 00:27:31.103 Firmware Slot Information 00:27:31.103 ========================= 00:27:31.103 Active slot: 1 00:27:31.103 Slot 1 Firmware Revision: 1.0 00:27:31.103 00:27:31.103 00:27:31.103 Commands Supported and Effects 00:27:31.103 ============================== 00:27:31.103 Admin Commands 00:27:31.103 -------------- 00:27:31.103 Delete I/O Submission Queue (00h): Supported 00:27:31.103 Create I/O Submission Queue (01h): Supported 00:27:31.103 Get Log Page (02h): Supported 00:27:31.103 Delete I/O Completion Queue (04h): Supported 00:27:31.103 Create I/O Completion Queue (05h): Supported 00:27:31.103 Identify (06h): Supported 00:27:31.103 Abort (08h): Supported 00:27:31.103 Set Features (09h): Supported 00:27:31.103 Get Features (0Ah): Supported 00:27:31.103 Asynchronous Event Request (0Ch): Supported 00:27:31.103 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:31.103 Directive Send (19h): Supported 00:27:31.103 Directive Receive (1Ah): Supported 00:27:31.103 Virtualization Management (1Ch): Supported 00:27:31.103 Doorbell Buffer Config (7Ch): Supported 00:27:31.103 Format NVM (80h): Supported LBA-Change 00:27:31.103 I/O Commands 00:27:31.103 ------------ 00:27:31.103 Flush (00h): Supported LBA-Change 00:27:31.103 Write (01h): Supported LBA-Change 00:27:31.103 Read (02h): Supported 00:27:31.103 Compare (05h): Supported 00:27:31.103 Write Zeroes (08h): Supported LBA-Change 00:27:31.103 Dataset Management (09h): Supported LBA-Change 00:27:31.103 Unknown (0Ch): Supported 00:27:31.103 Unknown (12h): Supported 00:27:31.103 Copy (19h): Supported LBA-Change 00:27:31.103 Unknown (1Dh): Supported LBA-Change 00:27:31.103 00:27:31.103 Error Log 00:27:31.103 ========= 00:27:31.103 00:27:31.103 Arbitration 00:27:31.103 =========== 00:27:31.103 Arbitration Burst: no limit 00:27:31.103 00:27:31.103 Power Management 00:27:31.103 ================ 00:27:31.103 Number of Power States: 1 00:27:31.103 Current Power State: Power State #0 00:27:31.103 Power State #0: 00:27:31.103 Max Power: 25.00 W 00:27:31.103 Non-Operational State: Operational 00:27:31.103 Entry Latency: 16 microseconds 00:27:31.103 Exit Latency: 4 microseconds 00:27:31.103 Relative Read Throughput: 0 00:27:31.103 Relative Read Latency: 0 00:27:31.103 Relative Write Throughput: 0 00:27:31.103 Relative Write Latency: 0 00:27:31.103 Idle Power: Not Reported 00:27:31.103 Active Power: Not Reported 00:27:31.103 Non-Operational Permissive Mode: Not Supported 00:27:31.103 00:27:31.103 Health Information 00:27:31.103 ================== 00:27:31.103 Critical Warnings: 00:27:31.103 Available Spare Space: OK 00:27:31.103 Temperature: OK 00:27:31.103 Device Reliability: OK 00:27:31.103 Read Only: No 00:27:31.103 Volatile Memory Backup: OK 00:27:31.103 Current Temperature: 323 Kelvin (50 Celsius) 00:27:31.103 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:31.103 Available Spare: 0% 00:27:31.103 
Available Spare Threshold: 0% 00:27:31.103 Life Percentage Used: 0% 00:27:31.103 Data Units Read: 2284 00:27:31.103 Data Units Written: 1964 00:27:31.103 Host Read Commands: 95707 00:27:31.103 Host Write Commands: 91477 00:27:31.103 Controller Busy Time: 0 minutes 00:27:31.103 Power Cycles: 0 00:27:31.103 Power On Hours: 0 hours 00:27:31.103 Unsafe Shutdowns: 0 00:27:31.103 Unrecoverable Media Errors: 0 00:27:31.103 Lifetime Error Log Entries: 0 00:27:31.104 Warning Temperature Time: 0 minutes 00:27:31.104 Critical Temperature Time: 0 minutes 00:27:31.104 00:27:31.104 Number of Queues 00:27:31.104 ================ 00:27:31.104 Number of I/O Submission Queues: 64 00:27:31.104 Number of I/O Completion Queues: 64 00:27:31.104 00:27:31.104 ZNS Specific Controller Data 00:27:31.104 ============================ 00:27:31.104 Zone Append Size Limit: 0 00:27:31.104 00:27:31.104 00:27:31.104 Active Namespaces 00:27:31.104 ================= 00:27:31.104 Namespace ID:1 00:27:31.104 Error Recovery Timeout: Unlimited 00:27:31.104 Command Set Identifier: NVM (00h) 00:27:31.104 Deallocate: Supported 00:27:31.104 Deallocated/Unwritten Error: Supported 00:27:31.104 Deallocated Read Value: All 0x00 00:27:31.104 Deallocate in Write Zeroes: Not Supported 00:27:31.104 Deallocated Guard Field: 0xFFFF 00:27:31.104 Flush: Supported 00:27:31.104 Reservation: Not Supported 00:27:31.104 Namespace Sharing Capabilities: Private 00:27:31.104 Size (in LBAs): 1048576 (4GiB) 00:27:31.104 Capacity (in LBAs): 1048576 (4GiB) 00:27:31.104 Utilization (in LBAs): 1048576 (4GiB) 00:27:31.104 Thin Provisioning: Not Supported 00:27:31.104 Per-NS Atomic Units: No 00:27:31.104 Maximum Single Source Range Length: 128 00:27:31.104 Maximum Copy Length: 128 00:27:31.104 Maximum Source Range Count: 128 00:27:31.104 NGUID/EUI64 Never Reused: No 00:27:31.104 Namespace Write Protected: No 00:27:31.104 Number of LBA Formats: 8 00:27:31.104 Current LBA Format: LBA Format #04 00:27:31.104 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:31.104 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:31.104 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:31.104 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:31.104 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:31.104 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:31.104 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:31.104 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:31.104 00:27:31.104 NVM Specific Namespace Data 00:27:31.104 =========================== 00:27:31.104 Logical Block Storage Tag Mask: 0 00:27:31.104 Protection Information Capabilities: 00:27:31.104 16b Guard Protection Information Storage Tag Support: No 00:27:31.104 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:31.104 Storage Tag Check Read Support: No 00:27:31.104 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Namespace ID:2 00:27:31.104 Error Recovery Timeout: Unlimited 00:27:31.104 Command Set Identifier: NVM (00h) 00:27:31.104 Deallocate: Supported 00:27:31.104 Deallocated/Unwritten Error: Supported 00:27:31.104 Deallocated Read Value: All 0x00 00:27:31.104 Deallocate in Write Zeroes: Not Supported 00:27:31.104 Deallocated Guard Field: 0xFFFF 00:27:31.104 Flush: Supported 00:27:31.104 Reservation: Not Supported 00:27:31.104 Namespace Sharing Capabilities: Private 00:27:31.104 Size (in LBAs): 1048576 (4GiB) 00:27:31.104 Capacity (in LBAs): 1048576 (4GiB) 00:27:31.104 Utilization (in LBAs): 1048576 (4GiB) 00:27:31.104 Thin Provisioning: Not Supported 00:27:31.104 Per-NS Atomic Units: No 00:27:31.104 Maximum Single Source Range Length: 128 00:27:31.104 Maximum Copy Length: 128 00:27:31.104 Maximum Source Range Count: 128 00:27:31.104 NGUID/EUI64 Never Reused: No 00:27:31.104 Namespace Write Protected: No 00:27:31.104 Number of LBA Formats: 8 00:27:31.104 Current LBA Format: LBA Format #04 00:27:31.104 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:31.104 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:31.104 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:31.104 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:31.104 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:31.104 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:31.104 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:31.104 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:31.104 00:27:31.104 NVM Specific Namespace Data 00:27:31.104 =========================== 00:27:31.104 Logical Block Storage Tag Mask: 0 00:27:31.104 Protection Information Capabilities: 00:27:31.104 16b Guard Protection Information Storage Tag Support: No 00:27:31.104 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:31.104 Storage Tag Check Read Support: No 00:27:31.104 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.104 Namespace ID:3 00:27:31.104 Error Recovery Timeout: Unlimited 00:27:31.104 Command Set Identifier: NVM (00h) 00:27:31.104 Deallocate: Supported 00:27:31.104 Deallocated/Unwritten Error: Supported 00:27:31.104 Deallocated Read Value: All 0x00 00:27:31.104 Deallocate in Write Zeroes: Not Supported 00:27:31.104 Deallocated Guard Field: 0xFFFF 00:27:31.104 Flush: Supported 00:27:31.104 Reservation: Not Supported 00:27:31.104 Namespace Sharing Capabilities: Private 00:27:31.104 Size (in LBAs): 1048576 (4GiB) 00:27:31.104 Capacity (in LBAs): 1048576 (4GiB) 00:27:31.104 Utilization (in LBAs): 1048576 (4GiB) 00:27:31.104 Thin Provisioning: Not Supported 
00:27:31.104 Per-NS Atomic Units: No 00:27:31.104 Maximum Single Source Range Length: 128 00:27:31.104 Maximum Copy Length: 128 00:27:31.104 Maximum Source Range Count: 128 00:27:31.104 NGUID/EUI64 Never Reused: No 00:27:31.104 Namespace Write Protected: No 00:27:31.104 Number of LBA Formats: 8 00:27:31.104 Current LBA Format: LBA Format #04 00:27:31.104 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:31.104 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:31.104 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:31.104 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:31.104 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:31.104 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:31.104 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:31.104 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:31.104 00:27:31.104 NVM Specific Namespace Data 00:27:31.104 =========================== 00:27:31.104 Logical Block Storage Tag Mask: 0 00:27:31.104 Protection Information Capabilities: 00:27:31.104 16b Guard Protection Information Storage Tag Support: No 00:27:31.104 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:31.362 Storage Tag Check Read Support: No 00:27:31.362 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.362 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:27:31.362 07:36:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:27:31.621 ===================================================== 00:27:31.621 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:31.621 ===================================================== 00:27:31.621 Controller Capabilities/Features 00:27:31.621 ================================ 00:27:31.621 Vendor ID: 1b36 00:27:31.621 Subsystem Vendor ID: 1af4 00:27:31.621 Serial Number: 12343 00:27:31.621 Model Number: QEMU NVMe Ctrl 00:27:31.621 Firmware Version: 8.0.0 00:27:31.621 Recommended Arb Burst: 6 00:27:31.621 IEEE OUI Identifier: 00 54 52 00:27:31.621 Multi-path I/O 00:27:31.621 May have multiple subsystem ports: No 00:27:31.621 May have multiple controllers: Yes 00:27:31.621 Associated with SR-IOV VF: No 00:27:31.621 Max Data Transfer Size: 524288 00:27:31.621 Max Number of Namespaces: 256 00:27:31.621 Max Number of I/O Queues: 64 00:27:31.621 NVMe Specification Version (VS): 1.4 00:27:31.621 NVMe Specification Version (Identify): 1.4 00:27:31.621 Maximum Queue Entries: 2048 00:27:31.621 Contiguous Queues Required: Yes 00:27:31.621 Arbitration Mechanisms Supported 00:27:31.621 Weighted Round Robin: Not Supported 00:27:31.621 Vendor Specific: Not Supported 00:27:31.621 Reset Timeout: 7500 ms 00:27:31.621 
Doorbell Stride: 4 bytes 00:27:31.621 NVM Subsystem Reset: Not Supported 00:27:31.621 Command Sets Supported 00:27:31.621 NVM Command Set: Supported 00:27:31.621 Boot Partition: Not Supported 00:27:31.621 Memory Page Size Minimum: 4096 bytes 00:27:31.621 Memory Page Size Maximum: 65536 bytes 00:27:31.621 Persistent Memory Region: Not Supported 00:27:31.621 Optional Asynchronous Events Supported 00:27:31.621 Namespace Attribute Notices: Supported 00:27:31.621 Firmware Activation Notices: Not Supported 00:27:31.621 ANA Change Notices: Not Supported 00:27:31.621 PLE Aggregate Log Change Notices: Not Supported 00:27:31.621 LBA Status Info Alert Notices: Not Supported 00:27:31.621 EGE Aggregate Log Change Notices: Not Supported 00:27:31.621 Normal NVM Subsystem Shutdown event: Not Supported 00:27:31.621 Zone Descriptor Change Notices: Not Supported 00:27:31.621 Discovery Log Change Notices: Not Supported 00:27:31.621 Controller Attributes 00:27:31.621 128-bit Host Identifier: Not Supported 00:27:31.621 Non-Operational Permissive Mode: Not Supported 00:27:31.621 NVM Sets: Not Supported 00:27:31.621 Read Recovery Levels: Not Supported 00:27:31.621 Endurance Groups: Supported 00:27:31.621 Predictable Latency Mode: Not Supported 00:27:31.621 Traffic Based Keep ALive: Not Supported 00:27:31.621 Namespace Granularity: Not Supported 00:27:31.621 SQ Associations: Not Supported 00:27:31.621 UUID List: Not Supported 00:27:31.621 Multi-Domain Subsystem: Not Supported 00:27:31.621 Fixed Capacity Management: Not Supported 00:27:31.621 Variable Capacity Management: Not Supported 00:27:31.621 Delete Endurance Group: Not Supported 00:27:31.621 Delete NVM Set: Not Supported 00:27:31.621 Extended LBA Formats Supported: Supported 00:27:31.621 Flexible Data Placement Supported: Supported 00:27:31.621 00:27:31.621 Controller Memory Buffer Support 00:27:31.621 ================================ 00:27:31.621 Supported: No 00:27:31.621 00:27:31.621 Persistent Memory Region Support 00:27:31.621 ================================ 00:27:31.621 Supported: No 00:27:31.621 00:27:31.621 Admin Command Set Attributes 00:27:31.621 ============================ 00:27:31.621 Security Send/Receive: Not Supported 00:27:31.621 Format NVM: Supported 00:27:31.621 Firmware Activate/Download: Not Supported 00:27:31.621 Namespace Management: Supported 00:27:31.621 Device Self-Test: Not Supported 00:27:31.621 Directives: Supported 00:27:31.621 NVMe-MI: Not Supported 00:27:31.621 Virtualization Management: Not Supported 00:27:31.621 Doorbell Buffer Config: Supported 00:27:31.621 Get LBA Status Capability: Not Supported 00:27:31.621 Command & Feature Lockdown Capability: Not Supported 00:27:31.621 Abort Command Limit: 4 00:27:31.621 Async Event Request Limit: 4 00:27:31.621 Number of Firmware Slots: N/A 00:27:31.621 Firmware Slot 1 Read-Only: N/A 00:27:31.621 Firmware Activation Without Reset: N/A 00:27:31.621 Multiple Update Detection Support: N/A 00:27:31.621 Firmware Update Granularity: No Information Provided 00:27:31.621 Per-Namespace SMART Log: Yes 00:27:31.621 Asymmetric Namespace Access Log Page: Not Supported 00:27:31.621 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:27:31.621 Command Effects Log Page: Supported 00:27:31.621 Get Log Page Extended Data: Supported 00:27:31.621 Telemetry Log Pages: Not Supported 00:27:31.621 Persistent Event Log Pages: Not Supported 00:27:31.621 Supported Log Pages Log Page: May Support 00:27:31.621 Commands Supported & Effects Log Page: Not Supported 00:27:31.621 Feature Identifiers & Effects Log 
Page:May Support 00:27:31.621 NVMe-MI Commands & Effects Log Page: May Support 00:27:31.621 Data Area 4 for Telemetry Log: Not Supported 00:27:31.621 Error Log Page Entries Supported: 1 00:27:31.621 Keep Alive: Not Supported 00:27:31.621 00:27:31.621 NVM Command Set Attributes 00:27:31.621 ========================== 00:27:31.621 Submission Queue Entry Size 00:27:31.621 Max: 64 00:27:31.621 Min: 64 00:27:31.621 Completion Queue Entry Size 00:27:31.621 Max: 16 00:27:31.621 Min: 16 00:27:31.621 Number of Namespaces: 256 00:27:31.621 Compare Command: Supported 00:27:31.621 Write Uncorrectable Command: Not Supported 00:27:31.621 Dataset Management Command: Supported 00:27:31.621 Write Zeroes Command: Supported 00:27:31.621 Set Features Save Field: Supported 00:27:31.621 Reservations: Not Supported 00:27:31.621 Timestamp: Supported 00:27:31.621 Copy: Supported 00:27:31.621 Volatile Write Cache: Present 00:27:31.621 Atomic Write Unit (Normal): 1 00:27:31.621 Atomic Write Unit (PFail): 1 00:27:31.621 Atomic Compare & Write Unit: 1 00:27:31.621 Fused Compare & Write: Not Supported 00:27:31.621 Scatter-Gather List 00:27:31.621 SGL Command Set: Supported 00:27:31.621 SGL Keyed: Not Supported 00:27:31.621 SGL Bit Bucket Descriptor: Not Supported 00:27:31.621 SGL Metadata Pointer: Not Supported 00:27:31.621 Oversized SGL: Not Supported 00:27:31.621 SGL Metadata Address: Not Supported 00:27:31.621 SGL Offset: Not Supported 00:27:31.621 Transport SGL Data Block: Not Supported 00:27:31.621 Replay Protected Memory Block: Not Supported 00:27:31.621 00:27:31.621 Firmware Slot Information 00:27:31.621 ========================= 00:27:31.621 Active slot: 1 00:27:31.621 Slot 1 Firmware Revision: 1.0 00:27:31.621 00:27:31.621 00:27:31.621 Commands Supported and Effects 00:27:31.621 ============================== 00:27:31.621 Admin Commands 00:27:31.621 -------------- 00:27:31.621 Delete I/O Submission Queue (00h): Supported 00:27:31.621 Create I/O Submission Queue (01h): Supported 00:27:31.621 Get Log Page (02h): Supported 00:27:31.621 Delete I/O Completion Queue (04h): Supported 00:27:31.621 Create I/O Completion Queue (05h): Supported 00:27:31.621 Identify (06h): Supported 00:27:31.621 Abort (08h): Supported 00:27:31.621 Set Features (09h): Supported 00:27:31.621 Get Features (0Ah): Supported 00:27:31.621 Asynchronous Event Request (0Ch): Supported 00:27:31.621 Namespace Attachment (15h): Supported NS-Inventory-Change 00:27:31.621 Directive Send (19h): Supported 00:27:31.621 Directive Receive (1Ah): Supported 00:27:31.621 Virtualization Management (1Ch): Supported 00:27:31.621 Doorbell Buffer Config (7Ch): Supported 00:27:31.621 Format NVM (80h): Supported LBA-Change 00:27:31.621 I/O Commands 00:27:31.621 ------------ 00:27:31.621 Flush (00h): Supported LBA-Change 00:27:31.621 Write (01h): Supported LBA-Change 00:27:31.621 Read (02h): Supported 00:27:31.621 Compare (05h): Supported 00:27:31.621 Write Zeroes (08h): Supported LBA-Change 00:27:31.621 Dataset Management (09h): Supported LBA-Change 00:27:31.621 Unknown (0Ch): Supported 00:27:31.621 Unknown (12h): Supported 00:27:31.621 Copy (19h): Supported LBA-Change 00:27:31.621 Unknown (1Dh): Supported LBA-Change 00:27:31.621 00:27:31.621 Error Log 00:27:31.621 ========= 00:27:31.621 00:27:31.621 Arbitration 00:27:31.621 =========== 00:27:31.621 Arbitration Burst: no limit 00:27:31.621 00:27:31.621 Power Management 00:27:31.621 ================ 00:27:31.621 Number of Power States: 1 00:27:31.621 Current Power State: Power State #0 00:27:31.621 Power State #0: 
00:27:31.621 Max Power: 25.00 W 00:27:31.621 Non-Operational State: Operational 00:27:31.621 Entry Latency: 16 microseconds 00:27:31.621 Exit Latency: 4 microseconds 00:27:31.621 Relative Read Throughput: 0 00:27:31.622 Relative Read Latency: 0 00:27:31.622 Relative Write Throughput: 0 00:27:31.622 Relative Write Latency: 0 00:27:31.622 Idle Power: Not Reported 00:27:31.622 Active Power: Not Reported 00:27:31.622 Non-Operational Permissive Mode: Not Supported 00:27:31.622 00:27:31.622 Health Information 00:27:31.622 ================== 00:27:31.622 Critical Warnings: 00:27:31.622 Available Spare Space: OK 00:27:31.622 Temperature: OK 00:27:31.622 Device Reliability: OK 00:27:31.622 Read Only: No 00:27:31.622 Volatile Memory Backup: OK 00:27:31.622 Current Temperature: 323 Kelvin (50 Celsius) 00:27:31.622 Temperature Threshold: 343 Kelvin (70 Celsius) 00:27:31.622 Available Spare: 0% 00:27:31.622 Available Spare Threshold: 0% 00:27:31.622 Life Percentage Used: 0% 00:27:31.622 Data Units Read: 826 00:27:31.622 Data Units Written: 720 00:27:31.622 Host Read Commands: 32466 00:27:31.622 Host Write Commands: 31056 00:27:31.622 Controller Busy Time: 0 minutes 00:27:31.622 Power Cycles: 0 00:27:31.622 Power On Hours: 0 hours 00:27:31.622 Unsafe Shutdowns: 0 00:27:31.622 Unrecoverable Media Errors: 0 00:27:31.622 Lifetime Error Log Entries: 0 00:27:31.622 Warning Temperature Time: 0 minutes 00:27:31.622 Critical Temperature Time: 0 minutes 00:27:31.622 00:27:31.622 Number of Queues 00:27:31.622 ================ 00:27:31.622 Number of I/O Submission Queues: 64 00:27:31.622 Number of I/O Completion Queues: 64 00:27:31.622 00:27:31.622 ZNS Specific Controller Data 00:27:31.622 ============================ 00:27:31.622 Zone Append Size Limit: 0 00:27:31.622 00:27:31.622 00:27:31.622 Active Namespaces 00:27:31.622 ================= 00:27:31.622 Namespace ID:1 00:27:31.622 Error Recovery Timeout: Unlimited 00:27:31.622 Command Set Identifier: NVM (00h) 00:27:31.622 Deallocate: Supported 00:27:31.622 Deallocated/Unwritten Error: Supported 00:27:31.622 Deallocated Read Value: All 0x00 00:27:31.622 Deallocate in Write Zeroes: Not Supported 00:27:31.622 Deallocated Guard Field: 0xFFFF 00:27:31.622 Flush: Supported 00:27:31.622 Reservation: Not Supported 00:27:31.622 Namespace Sharing Capabilities: Multiple Controllers 00:27:31.622 Size (in LBAs): 262144 (1GiB) 00:27:31.622 Capacity (in LBAs): 262144 (1GiB) 00:27:31.622 Utilization (in LBAs): 262144 (1GiB) 00:27:31.622 Thin Provisioning: Not Supported 00:27:31.622 Per-NS Atomic Units: No 00:27:31.622 Maximum Single Source Range Length: 128 00:27:31.622 Maximum Copy Length: 128 00:27:31.622 Maximum Source Range Count: 128 00:27:31.622 NGUID/EUI64 Never Reused: No 00:27:31.622 Namespace Write Protected: No 00:27:31.622 Endurance group ID: 1 00:27:31.622 Number of LBA Formats: 8 00:27:31.622 Current LBA Format: LBA Format #04 00:27:31.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:31.622 LBA Format #01: Data Size: 512 Metadata Size: 8 00:27:31.622 LBA Format #02: Data Size: 512 Metadata Size: 16 00:27:31.622 LBA Format #03: Data Size: 512 Metadata Size: 64 00:27:31.622 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:27:31.622 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:27:31.622 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:27:31.622 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:27:31.622 00:27:31.622 Get Feature FDP: 00:27:31.622 ================ 00:27:31.622 Enabled: Yes 00:27:31.622 FDP configuration index: 0 00:27:31.622 
00:27:31.622 FDP configurations log page 00:27:31.622 =========================== 00:27:31.622 Number of FDP configurations: 1 00:27:31.622 Version: 0 00:27:31.622 Size: 112 00:27:31.622 FDP Configuration Descriptor: 0 00:27:31.622 Descriptor Size: 96 00:27:31.622 Reclaim Group Identifier format: 2 00:27:31.622 FDP Volatile Write Cache: Not Present 00:27:31.622 FDP Configuration: Valid 00:27:31.622 Vendor Specific Size: 0 00:27:31.622 Number of Reclaim Groups: 2 00:27:31.622 Number of Reclaim Unit Handles: 8 00:27:31.622 Max Placement Identifiers: 128 00:27:31.622 Number of Namespaces Supported: 256 00:27:31.622 Reclaim unit Nominal Size: 6000000 bytes 00:27:31.622 Estimated Reclaim Unit Time Limit: Not Reported 00:27:31.622 RUH Desc #000: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #001: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #002: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #003: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #004: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #005: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #006: RUH Type: Initially Isolated 00:27:31.622 RUH Desc #007: RUH Type: Initially Isolated 00:27:31.622 00:27:31.622 FDP reclaim unit handle usage log page 00:27:31.622 ====================================== 00:27:31.622 Number of Reclaim Unit Handles: 8 00:27:31.622 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:27:31.622 RUH Usage Desc #001: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #002: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #003: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #004: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #005: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #006: RUH Attributes: Unused 00:27:31.622 RUH Usage Desc #007: RUH Attributes: Unused 00:27:31.622 00:27:31.622 FDP statistics log page 00:27:31.622 ======================= 00:27:31.622 Host bytes with metadata written: 448372736 00:27:31.622 Media bytes with metadata written: 448438272 00:27:31.622 Media bytes erased: 0 00:27:31.622 00:27:31.622 FDP events log page 00:27:31.622 =================== 00:27:31.622 Number of FDP events: 0 00:27:31.622 00:27:31.622 NVM Specific Namespace Data 00:27:31.622 =========================== 00:27:31.622 Logical Block Storage Tag Mask: 0 00:27:31.622 Protection Information Capabilities: 00:27:31.622 16b Guard Protection Information Storage Tag Support: No 00:27:31.622 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:27:31.622 Storage Tag Check Read Support: No 00:27:31.622 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:27:31.622 00:27:31.622 real 0m1.607s 00:27:31.622 user 0m0.655s 00:27:31.622 sys 0m0.759s 00:27:31.622 07:36:10 nvme.nvme_identify -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.622 ************************************ 00:27:31.622 END TEST nvme_identify 00:27:31.622 ************************************ 00:27:31.622 07:36:10 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:27:31.622 07:36:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:31.622 07:36:10 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:27:31.622 07:36:10 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.622 07:36:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.622 07:36:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:31.622 ************************************ 00:27:31.622 START TEST nvme_perf 00:27:31.622 ************************************ 00:27:31.622 07:36:10 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:27:31.622 07:36:10 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:27:33.030 Initializing NVMe Controllers 00:27:33.030 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:33.030 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:33.030 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:33.030 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:27:33.030 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:27:33.030 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:27:33.030 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:27:33.030 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:27:33.030 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:27:33.030 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:27:33.030 Initialization complete. Launching workers. 
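The nvme_perf test drives spdk_nvme_perf with the command line captured above. As a rough sketch, the same read benchmark can be reproduced outside run_test as shown below; the binary path assumes the vagrant VM layout used by this job, the annotated flags are the standard spdk_nvme_perf options (queue depth, workload, I/O size, run time), and -LL, -i 0 and -N are carried over verbatim from the logged invocation rather than re-documented here.
# reproduce the logged read benchmark (adjust the repo path to your checkout)
#   -q 128    queue depth
#   -w read   workload type (read, write, randread, ...)
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL -i 0 -N  taken as-is from the logged command
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N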
00:27:33.030 ======================================================== 00:27:33.030 Latency(us) 00:27:33.030 Device Information : IOPS MiB/s Average min max 00:27:33.030 PCIE (0000:00:10.0) NSID 1 from core 0: 12385.19 145.14 10355.97 8116.88 44917.54 00:27:33.030 PCIE (0000:00:11.0) NSID 1 from core 0: 12385.19 145.14 10329.33 8174.73 41924.09 00:27:33.030 PCIE (0000:00:13.0) NSID 1 from core 0: 12385.19 145.14 10299.98 8099.19 39392.62 00:27:33.030 PCIE (0000:00:12.0) NSID 1 from core 0: 12385.19 145.14 10270.16 8144.39 36288.50 00:27:33.030 PCIE (0000:00:12.0) NSID 2 from core 0: 12385.19 145.14 10239.56 8223.27 33214.47 00:27:33.030 PCIE (0000:00:12.0) NSID 3 from core 0: 12385.19 145.14 10209.31 8166.28 29926.76 00:27:33.030 ======================================================== 00:27:33.030 Total : 74311.11 870.83 10284.05 8099.19 44917.54 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8519.680us 00:27:33.030 10.00000% : 8996.305us 00:27:33.030 25.00000% : 9413.353us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10545.338us 00:27:33.030 90.00000% : 11260.276us 00:27:33.030 95.00000% : 12332.684us 00:27:33.030 98.00000% : 13762.560us 00:27:33.030 99.00000% : 33602.095us 00:27:33.030 99.50000% : 42181.353us 00:27:33.030 99.90000% : 44326.167us 00:27:33.030 99.99000% : 45041.105us 00:27:33.030 99.99900% : 45041.105us 00:27:33.030 99.99990% : 45041.105us 00:27:33.030 99.99999% : 45041.105us 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8638.836us 00:27:33.030 10.00000% : 9055.884us 00:27:33.030 25.00000% : 9472.931us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10485.760us 00:27:33.030 90.00000% : 11200.698us 00:27:33.030 95.00000% : 12332.684us 00:27:33.030 98.00000% : 13702.982us 00:27:33.030 99.00000% : 31218.967us 00:27:33.030 99.50000% : 39321.600us 00:27:33.030 99.90000% : 41466.415us 00:27:33.030 99.99000% : 41943.040us 00:27:33.030 99.99900% : 41943.040us 00:27:33.030 99.99990% : 41943.040us 00:27:33.030 99.99999% : 41943.040us 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8638.836us 00:27:33.030 10.00000% : 9055.884us 00:27:33.030 25.00000% : 9472.931us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10545.338us 00:27:33.030 90.00000% : 11200.698us 00:27:33.030 95.00000% : 12273.105us 00:27:33.030 98.00000% : 13524.247us 00:27:33.030 99.00000% : 28359.215us 00:27:33.030 99.50000% : 36700.160us 00:27:33.030 99.90000% : 38844.975us 00:27:33.030 99.99000% : 39559.913us 00:27:33.030 99.99900% : 39559.913us 00:27:33.030 99.99990% : 39559.913us 00:27:33.030 99.99999% : 39559.913us 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8638.836us 00:27:33.030 10.00000% : 9055.884us 00:27:33.030 25.00000% : 9472.931us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10545.338us 00:27:33.030 90.00000% : 11260.276us 00:27:33.030 95.00000% : 12213.527us 00:27:33.030 98.00000% : 13583.825us 
00:27:33.030 99.00000% : 25261.149us 00:27:33.030 99.50000% : 33602.095us 00:27:33.030 99.90000% : 35985.222us 00:27:33.030 99.99000% : 36461.847us 00:27:33.030 99.99900% : 36461.847us 00:27:33.030 99.99990% : 36461.847us 00:27:33.030 99.99999% : 36461.847us 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8638.836us 00:27:33.030 10.00000% : 9055.884us 00:27:33.030 25.00000% : 9472.931us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10545.338us 00:27:33.030 90.00000% : 11260.276us 00:27:33.030 95.00000% : 12213.527us 00:27:33.030 98.00000% : 13702.982us 00:27:33.030 99.00000% : 21924.771us 00:27:33.030 99.50000% : 30265.716us 00:27:33.030 99.90000% : 32887.156us 00:27:33.030 99.99000% : 33363.782us 00:27:33.030 99.99900% : 33363.782us 00:27:33.030 99.99990% : 33363.782us 00:27:33.030 99.99999% : 33363.782us 00:27:33.030 00:27:33.030 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:27:33.030 ================================================================================= 00:27:33.030 1.00000% : 8638.836us 00:27:33.030 10.00000% : 9055.884us 00:27:33.030 25.00000% : 9472.931us 00:27:33.030 50.00000% : 9949.556us 00:27:33.030 75.00000% : 10545.338us 00:27:33.030 90.00000% : 11260.276us 00:27:33.030 95.00000% : 12332.684us 00:27:33.030 98.00000% : 13881.716us 00:27:33.030 99.00000% : 18826.705us 00:27:33.030 99.50000% : 27167.651us 00:27:33.030 99.90000% : 29431.622us 00:27:33.030 99.99000% : 29908.247us 00:27:33.030 99.99900% : 30027.404us 00:27:33.030 99.99990% : 30027.404us 00:27:33.030 99.99999% : 30027.404us 00:27:33.030 00:27:33.030 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:27:33.030 ============================================================================== 00:27:33.030 Range in us Cumulative IO count 00:27:33.030 8102.633 - 8162.211: 0.0322% ( 4) 00:27:33.030 8162.211 - 8221.789: 0.1047% ( 9) 00:27:33.030 8221.789 - 8281.367: 0.2014% ( 12) 00:27:33.030 8281.367 - 8340.945: 0.3544% ( 19) 00:27:33.030 8340.945 - 8400.524: 0.5316% ( 22) 00:27:33.030 8400.524 - 8460.102: 0.7812% ( 31) 00:27:33.030 8460.102 - 8519.680: 1.1517% ( 46) 00:27:33.030 8519.680 - 8579.258: 1.6269% ( 59) 00:27:33.030 8579.258 - 8638.836: 2.4001% ( 96) 00:27:33.030 8638.836 - 8698.415: 3.5116% ( 138) 00:27:33.030 8698.415 - 8757.993: 4.5586% ( 130) 00:27:33.030 8757.993 - 8817.571: 5.9842% ( 177) 00:27:33.031 8817.571 - 8877.149: 7.3615% ( 171) 00:27:33.031 8877.149 - 8936.727: 8.9481% ( 197) 00:27:33.031 8936.727 - 8996.305: 10.6153% ( 207) 00:27:33.031 8996.305 - 9055.884: 12.4275% ( 225) 00:27:33.031 9055.884 - 9115.462: 14.3363% ( 237) 00:27:33.031 9115.462 - 9175.040: 16.3418% ( 249) 00:27:33.031 9175.040 - 9234.618: 18.5567% ( 275) 00:27:33.031 9234.618 - 9294.196: 20.8199% ( 281) 00:27:33.031 9294.196 - 9353.775: 23.1878% ( 294) 00:27:33.031 9353.775 - 9413.353: 25.8537% ( 331) 00:27:33.031 9413.353 - 9472.931: 28.5116% ( 330) 00:27:33.031 9472.931 - 9532.509: 31.1936% ( 333) 00:27:33.031 9532.509 - 9592.087: 33.7065% ( 312) 00:27:33.031 9592.087 - 9651.665: 36.4771% ( 344) 00:27:33.031 9651.665 - 9711.244: 39.2236% ( 341) 00:27:33.031 9711.244 - 9770.822: 42.2439% ( 375) 00:27:33.031 9770.822 - 9830.400: 45.3608% ( 387) 00:27:33.031 9830.400 - 9889.978: 48.2684% ( 361) 00:27:33.031 9889.978 - 9949.556: 51.2081% ( 365) 00:27:33.031 9949.556 - 10009.135: 54.2928% ( 383) 00:27:33.031 10009.135 
- 10068.713: 57.1440% ( 354) 00:27:33.031 10068.713 - 10128.291: 59.9307% ( 346) 00:27:33.031 10128.291 - 10187.869: 62.5000% ( 319) 00:27:33.031 10187.869 - 10247.447: 65.0854% ( 321) 00:27:33.031 10247.447 - 10307.025: 67.4936% ( 299) 00:27:33.031 10307.025 - 10366.604: 69.7004% ( 274) 00:27:33.031 10366.604 - 10426.182: 71.8831% ( 271) 00:27:33.031 10426.182 - 10485.760: 73.7838% ( 236) 00:27:33.031 10485.760 - 10545.338: 75.6685% ( 234) 00:27:33.031 10545.338 - 10604.916: 77.5048% ( 228) 00:27:33.031 10604.916 - 10664.495: 79.1318% ( 202) 00:27:33.031 10664.495 - 10724.073: 80.7990% ( 207) 00:27:33.031 10724.073 - 10783.651: 82.2165% ( 176) 00:27:33.031 10783.651 - 10843.229: 83.6421% ( 177) 00:27:33.031 10843.229 - 10902.807: 84.9388% ( 161) 00:27:33.031 10902.807 - 10962.385: 85.9294% ( 123) 00:27:33.031 10962.385 - 11021.964: 86.9443% ( 126) 00:27:33.031 11021.964 - 11081.542: 87.8624% ( 114) 00:27:33.031 11081.542 - 11141.120: 88.7242% ( 107) 00:27:33.031 11141.120 - 11200.698: 89.5216% ( 99) 00:27:33.031 11200.698 - 11260.276: 90.0693% ( 68) 00:27:33.031 11260.276 - 11319.855: 90.6008% ( 66) 00:27:33.031 11319.855 - 11379.433: 91.0760% ( 59) 00:27:33.031 11379.433 - 11439.011: 91.4787% ( 50) 00:27:33.031 11439.011 - 11498.589: 91.8653% ( 48) 00:27:33.031 11498.589 - 11558.167: 92.1311% ( 33) 00:27:33.031 11558.167 - 11617.745: 92.4211% ( 36) 00:27:33.031 11617.745 - 11677.324: 92.6949% ( 34) 00:27:33.031 11677.324 - 11736.902: 93.0010% ( 38) 00:27:33.031 11736.902 - 11796.480: 93.3231% ( 40) 00:27:33.031 11796.480 - 11856.058: 93.5003% ( 22) 00:27:33.031 11856.058 - 11915.636: 93.7500% ( 31) 00:27:33.031 11915.636 - 11975.215: 93.9433% ( 24) 00:27:33.031 11975.215 - 12034.793: 94.1769% ( 29) 00:27:33.031 12034.793 - 12094.371: 94.3460% ( 21) 00:27:33.031 12094.371 - 12153.949: 94.5715% ( 28) 00:27:33.031 12153.949 - 12213.527: 94.7568% ( 23) 00:27:33.031 12213.527 - 12273.105: 94.9098% ( 19) 00:27:33.031 12273.105 - 12332.684: 95.0789% ( 21) 00:27:33.031 12332.684 - 12392.262: 95.2561% ( 22) 00:27:33.031 12392.262 - 12451.840: 95.4333% ( 22) 00:27:33.031 12451.840 - 12511.418: 95.5944% ( 20) 00:27:33.031 12511.418 - 12570.996: 95.7635% ( 21) 00:27:33.031 12570.996 - 12630.575: 95.8924% ( 16) 00:27:33.031 12630.575 - 12690.153: 96.0454% ( 19) 00:27:33.031 12690.153 - 12749.731: 96.1823% ( 17) 00:27:33.031 12749.731 - 12809.309: 96.3193% ( 17) 00:27:33.031 12809.309 - 12868.887: 96.4320% ( 14) 00:27:33.031 12868.887 - 12928.465: 96.6012% ( 21) 00:27:33.031 12928.465 - 12988.044: 96.7542% ( 19) 00:27:33.031 12988.044 - 13047.622: 96.8347% ( 10) 00:27:33.031 13047.622 - 13107.200: 96.9878% ( 19) 00:27:33.031 13107.200 - 13166.778: 97.1086% ( 15) 00:27:33.031 13166.778 - 13226.356: 97.2535% ( 18) 00:27:33.031 13226.356 - 13285.935: 97.3582% ( 13) 00:27:33.031 13285.935 - 13345.513: 97.4468% ( 11) 00:27:33.031 13345.513 - 13405.091: 97.5596% ( 14) 00:27:33.031 13405.091 - 13464.669: 97.6562% ( 12) 00:27:33.031 13464.669 - 13524.247: 97.7690% ( 14) 00:27:33.031 13524.247 - 13583.825: 97.8173% ( 6) 00:27:33.031 13583.825 - 13643.404: 97.9381% ( 15) 00:27:33.031 13643.404 - 13702.982: 97.9945% ( 7) 00:27:33.031 13702.982 - 13762.560: 98.0670% ( 9) 00:27:33.031 13762.560 - 13822.138: 98.1395% ( 9) 00:27:33.031 13822.138 - 13881.716: 98.1878% ( 6) 00:27:33.031 13881.716 - 13941.295: 98.2845% ( 12) 00:27:33.031 13941.295 - 14000.873: 98.3247% ( 5) 00:27:33.031 14000.873 - 14060.451: 98.4053% ( 10) 00:27:33.031 14060.451 - 14120.029: 98.4697% ( 8) 00:27:33.031 14120.029 - 14179.607: 98.5341% ( 
8) 00:27:33.031 14179.607 - 14239.185: 98.6066% ( 9) 00:27:33.031 14239.185 - 14298.764: 98.6550% ( 6) 00:27:33.031 14298.764 - 14358.342: 98.7113% ( 7) 00:27:33.031 14358.342 - 14417.920: 98.7516% ( 5) 00:27:33.031 14417.920 - 14477.498: 98.7919% ( 5) 00:27:33.031 14477.498 - 14537.076: 98.8322% ( 5) 00:27:33.031 14537.076 - 14596.655: 98.8724% ( 5) 00:27:33.031 14596.655 - 14656.233: 98.9127% ( 5) 00:27:33.031 14656.233 - 14715.811: 98.9530% ( 5) 00:27:33.031 14715.811 - 14775.389: 98.9610% ( 1) 00:27:33.031 14775.389 - 14834.967: 98.9691% ( 1) 00:27:33.031 33125.469 - 33363.782: 98.9932% ( 3) 00:27:33.031 33363.782 - 33602.095: 99.0255% ( 4) 00:27:33.031 33602.095 - 33840.407: 99.0657% ( 5) 00:27:33.031 33840.407 - 34078.720: 99.0979% ( 4) 00:27:33.031 34078.720 - 34317.033: 99.1382% ( 5) 00:27:33.031 34317.033 - 34555.345: 99.1704% ( 4) 00:27:33.031 34555.345 - 34793.658: 99.2107% ( 5) 00:27:33.031 34793.658 - 35031.971: 99.2429% ( 4) 00:27:33.031 35031.971 - 35270.284: 99.2832% ( 5) 00:27:33.031 35270.284 - 35508.596: 99.3154% ( 4) 00:27:33.031 35508.596 - 35746.909: 99.3476% ( 4) 00:27:33.031 35746.909 - 35985.222: 99.3879% ( 5) 00:27:33.031 35985.222 - 36223.535: 99.4201% ( 4) 00:27:33.031 36223.535 - 36461.847: 99.4604% ( 5) 00:27:33.031 36461.847 - 36700.160: 99.4845% ( 3) 00:27:33.031 41704.727 - 41943.040: 99.4926% ( 1) 00:27:33.031 41943.040 - 42181.353: 99.5329% ( 5) 00:27:33.031 42181.353 - 42419.665: 99.5731% ( 5) 00:27:33.031 42419.665 - 42657.978: 99.6134% ( 5) 00:27:33.031 42657.978 - 42896.291: 99.6537% ( 5) 00:27:33.031 42896.291 - 43134.604: 99.6939% ( 5) 00:27:33.031 43134.604 - 43372.916: 99.7342% ( 5) 00:27:33.031 43372.916 - 43611.229: 99.7745% ( 5) 00:27:33.031 43611.229 - 43849.542: 99.8148% ( 5) 00:27:33.031 43849.542 - 44087.855: 99.8550% ( 5) 00:27:33.031 44087.855 - 44326.167: 99.9034% ( 6) 00:27:33.031 44326.167 - 44564.480: 99.9436% ( 5) 00:27:33.031 44564.480 - 44802.793: 99.9839% ( 5) 00:27:33.031 44802.793 - 45041.105: 100.0000% ( 2) 00:27:33.031 00:27:33.031 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:27:33.031 ============================================================================== 00:27:33.031 Range in us Cumulative IO count 00:27:33.031 8162.211 - 8221.789: 0.0483% ( 6) 00:27:33.031 8221.789 - 8281.367: 0.0805% ( 4) 00:27:33.031 8281.367 - 8340.945: 0.1611% ( 10) 00:27:33.031 8340.945 - 8400.524: 0.2899% ( 16) 00:27:33.031 8400.524 - 8460.102: 0.4510% ( 20) 00:27:33.031 8460.102 - 8519.680: 0.7007% ( 31) 00:27:33.031 8519.680 - 8579.258: 0.9504% ( 31) 00:27:33.031 8579.258 - 8638.836: 1.4336% ( 60) 00:27:33.031 8638.836 - 8698.415: 2.1102% ( 84) 00:27:33.031 8698.415 - 8757.993: 2.8753% ( 95) 00:27:33.031 8757.993 - 8817.571: 3.9948% ( 139) 00:27:33.031 8817.571 - 8877.149: 5.3157% ( 164) 00:27:33.031 8877.149 - 8936.727: 7.0232% ( 212) 00:27:33.031 8936.727 - 8996.305: 8.8676% ( 229) 00:27:33.031 8996.305 - 9055.884: 10.7361% ( 232) 00:27:33.031 9055.884 - 9115.462: 12.7175% ( 246) 00:27:33.031 9115.462 - 9175.040: 14.8035% ( 259) 00:27:33.031 9175.040 - 9234.618: 17.1311% ( 289) 00:27:33.031 9234.618 - 9294.196: 19.5474% ( 300) 00:27:33.031 9294.196 - 9353.775: 22.0602% ( 312) 00:27:33.031 9353.775 - 9413.353: 24.5651% ( 311) 00:27:33.031 9413.353 - 9472.931: 27.2390% ( 332) 00:27:33.031 9472.931 - 9532.509: 30.0660% ( 351) 00:27:33.031 9532.509 - 9592.087: 32.9977% ( 364) 00:27:33.031 9592.087 - 9651.665: 35.9214% ( 363) 00:27:33.031 9651.665 - 9711.244: 38.8611% ( 365) 00:27:33.031 9711.244 - 9770.822: 41.9620% ( 385) 
00:27:33.031 9770.822 - 9830.400: 45.1756% ( 399) 00:27:33.031 9830.400 - 9889.978: 48.5019% ( 413) 00:27:33.031 9889.978 - 9949.556: 51.7719% ( 406) 00:27:33.031 9949.556 - 10009.135: 54.9291% ( 392) 00:27:33.031 10009.135 - 10068.713: 57.8608% ( 364) 00:27:33.031 10068.713 - 10128.291: 60.8006% ( 365) 00:27:33.031 10128.291 - 10187.869: 63.4826% ( 333) 00:27:33.031 10187.869 - 10247.447: 66.0358% ( 317) 00:27:33.031 10247.447 - 10307.025: 68.4842% ( 304) 00:27:33.031 10307.025 - 10366.604: 70.8119% ( 289) 00:27:33.031 10366.604 - 10426.182: 73.1314% ( 288) 00:27:33.031 10426.182 - 10485.760: 75.2980% ( 269) 00:27:33.031 10485.760 - 10545.338: 77.2874% ( 247) 00:27:33.031 10545.338 - 10604.916: 79.0673% ( 221) 00:27:33.031 10604.916 - 10664.495: 80.7023% ( 203) 00:27:33.031 10664.495 - 10724.073: 82.3131% ( 200) 00:27:33.031 10724.073 - 10783.651: 83.7307% ( 176) 00:27:33.031 10783.651 - 10843.229: 85.0515% ( 164) 00:27:33.031 10843.229 - 10902.807: 86.2033% ( 143) 00:27:33.031 10902.807 - 10962.385: 87.1939% ( 123) 00:27:33.031 10962.385 - 11021.964: 88.1282% ( 116) 00:27:33.031 11021.964 - 11081.542: 88.9014% ( 96) 00:27:33.031 11081.542 - 11141.120: 89.5296% ( 78) 00:27:33.031 11141.120 - 11200.698: 90.0773% ( 68) 00:27:33.031 11200.698 - 11260.276: 90.6250% ( 68) 00:27:33.031 11260.276 - 11319.855: 91.1002% ( 59) 00:27:33.031 11319.855 - 11379.433: 91.5110% ( 51) 00:27:33.031 11379.433 - 11439.011: 91.8251% ( 39) 00:27:33.031 11439.011 - 11498.589: 92.1150% ( 36) 00:27:33.031 11498.589 - 11558.167: 92.4452% ( 41) 00:27:33.031 11558.167 - 11617.745: 92.7352% ( 36) 00:27:33.031 11617.745 - 11677.324: 93.0171% ( 35) 00:27:33.031 11677.324 - 11736.902: 93.2345% ( 27) 00:27:33.031 11736.902 - 11796.480: 93.4601% ( 28) 00:27:33.031 11796.480 - 11856.058: 93.6534% ( 24) 00:27:33.031 11856.058 - 11915.636: 93.8547% ( 25) 00:27:33.031 11915.636 - 11975.215: 94.0561% ( 25) 00:27:33.031 11975.215 - 12034.793: 94.2494% ( 24) 00:27:33.031 12034.793 - 12094.371: 94.4104% ( 20) 00:27:33.031 12094.371 - 12153.949: 94.6037% ( 24) 00:27:33.031 12153.949 - 12213.527: 94.7809% ( 22) 00:27:33.031 12213.527 - 12273.105: 94.9662% ( 23) 00:27:33.031 12273.105 - 12332.684: 95.1273% ( 20) 00:27:33.031 12332.684 - 12392.262: 95.2561% ( 16) 00:27:33.031 12392.262 - 12451.840: 95.3930% ( 17) 00:27:33.031 12451.840 - 12511.418: 95.5300% ( 17) 00:27:33.031 12511.418 - 12570.996: 95.6669% ( 17) 00:27:33.031 12570.996 - 12630.575: 95.7957% ( 16) 00:27:33.031 12630.575 - 12690.153: 95.9407% ( 18) 00:27:33.031 12690.153 - 12749.731: 96.0696% ( 16) 00:27:33.031 12749.731 - 12809.309: 96.1985% ( 16) 00:27:33.031 12809.309 - 12868.887: 96.2951% ( 12) 00:27:33.031 12868.887 - 12928.465: 96.3837% ( 11) 00:27:33.031 12928.465 - 12988.044: 96.4642% ( 10) 00:27:33.031 12988.044 - 13047.622: 96.5609% ( 12) 00:27:33.031 13047.622 - 13107.200: 96.7220% ( 20) 00:27:33.031 13107.200 - 13166.778: 96.8428% ( 15) 00:27:33.031 13166.778 - 13226.356: 96.9958% ( 19) 00:27:33.031 13226.356 - 13285.935: 97.1569% ( 20) 00:27:33.031 13285.935 - 13345.513: 97.2938% ( 17) 00:27:33.031 13345.513 - 13405.091: 97.4468% ( 19) 00:27:33.031 13405.091 - 13464.669: 97.5677% ( 15) 00:27:33.031 13464.669 - 13524.247: 97.6643% ( 12) 00:27:33.031 13524.247 - 13583.825: 97.7771% ( 14) 00:27:33.031 13583.825 - 13643.404: 97.8979% ( 15) 00:27:33.031 13643.404 - 13702.982: 98.0026% ( 13) 00:27:33.031 13702.982 - 13762.560: 98.1153% ( 14) 00:27:33.031 13762.560 - 13822.138: 98.2361% ( 15) 00:27:33.031 13822.138 - 13881.716: 98.3086% ( 9) 00:27:33.031 13881.716 
- 13941.295: 98.3972% ( 11) 00:27:33.031 13941.295 - 14000.873: 98.4536% ( 7) 00:27:33.031 14000.873 - 14060.451: 98.5261% ( 9) 00:27:33.031 14060.451 - 14120.029: 98.5825% ( 7) 00:27:33.031 14120.029 - 14179.607: 98.6389% ( 7) 00:27:33.031 14179.607 - 14239.185: 98.6711% ( 4) 00:27:33.031 14239.185 - 14298.764: 98.6872% ( 2) 00:27:33.031 14298.764 - 14358.342: 98.7033% ( 2) 00:27:33.031 14358.342 - 14417.920: 98.7194% ( 2) 00:27:33.031 14417.920 - 14477.498: 98.7436% ( 3) 00:27:33.031 14477.498 - 14537.076: 98.7597% ( 2) 00:27:33.031 14537.076 - 14596.655: 98.7758% ( 2) 00:27:33.032 14596.655 - 14656.233: 98.7999% ( 3) 00:27:33.032 14656.233 - 14715.811: 98.8160% ( 2) 00:27:33.032 14715.811 - 14775.389: 98.8402% ( 3) 00:27:33.032 14775.389 - 14834.967: 98.8563% ( 2) 00:27:33.032 14834.967 - 14894.545: 98.8644% ( 1) 00:27:33.032 14894.545 - 14954.124: 98.8885% ( 3) 00:27:33.032 14954.124 - 15013.702: 98.9046% ( 2) 00:27:33.032 15013.702 - 15073.280: 98.9288% ( 3) 00:27:33.032 15073.280 - 15132.858: 98.9449% ( 2) 00:27:33.032 15132.858 - 15192.436: 98.9610% ( 2) 00:27:33.032 15192.436 - 15252.015: 98.9691% ( 1) 00:27:33.032 30742.342 - 30980.655: 98.9932% ( 3) 00:27:33.032 30980.655 - 31218.967: 99.0416% ( 6) 00:27:33.032 31218.967 - 31457.280: 99.0818% ( 5) 00:27:33.032 31457.280 - 31695.593: 99.1221% ( 5) 00:27:33.032 31695.593 - 31933.905: 99.1543% ( 4) 00:27:33.032 31933.905 - 32172.218: 99.2026% ( 6) 00:27:33.032 32172.218 - 32410.531: 99.2268% ( 3) 00:27:33.032 32410.531 - 32648.844: 99.2751% ( 6) 00:27:33.032 32648.844 - 32887.156: 99.3154% ( 5) 00:27:33.032 32887.156 - 33125.469: 99.3637% ( 6) 00:27:33.032 33125.469 - 33363.782: 99.4040% ( 5) 00:27:33.032 33363.782 - 33602.095: 99.4523% ( 6) 00:27:33.032 33602.095 - 33840.407: 99.4845% ( 4) 00:27:33.032 39083.287 - 39321.600: 99.5248% ( 5) 00:27:33.032 39321.600 - 39559.913: 99.5651% ( 5) 00:27:33.032 39559.913 - 39798.225: 99.6134% ( 6) 00:27:33.032 39798.225 - 40036.538: 99.6456% ( 4) 00:27:33.032 40036.538 - 40274.851: 99.6939% ( 6) 00:27:33.032 40274.851 - 40513.164: 99.7342% ( 5) 00:27:33.032 40513.164 - 40751.476: 99.7745% ( 5) 00:27:33.032 40751.476 - 40989.789: 99.8228% ( 6) 00:27:33.032 40989.789 - 41228.102: 99.8631% ( 5) 00:27:33.032 41228.102 - 41466.415: 99.9114% ( 6) 00:27:33.032 41466.415 - 41704.727: 99.9597% ( 6) 00:27:33.032 41704.727 - 41943.040: 100.0000% ( 5) 00:27:33.032 00:27:33.032 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:27:33.032 ============================================================================== 00:27:33.032 Range in us Cumulative IO count 00:27:33.032 8043.055 - 8102.633: 0.0081% ( 1) 00:27:33.032 8102.633 - 8162.211: 0.0483% ( 5) 00:27:33.032 8162.211 - 8221.789: 0.0644% ( 2) 00:27:33.032 8221.789 - 8281.367: 0.1047% ( 5) 00:27:33.032 8281.367 - 8340.945: 0.1691% ( 8) 00:27:33.032 8340.945 - 8400.524: 0.2658% ( 12) 00:27:33.032 8400.524 - 8460.102: 0.4188% ( 19) 00:27:33.032 8460.102 - 8519.680: 0.6765% ( 32) 00:27:33.032 8519.680 - 8579.258: 0.9021% ( 28) 00:27:33.032 8579.258 - 8638.836: 1.3450% ( 55) 00:27:33.032 8638.836 - 8698.415: 1.9572% ( 76) 00:27:33.032 8698.415 - 8757.993: 2.7062% ( 93) 00:27:33.032 8757.993 - 8817.571: 3.7854% ( 134) 00:27:33.032 8817.571 - 8877.149: 5.0580% ( 158) 00:27:33.032 8877.149 - 8936.727: 6.5722% ( 188) 00:27:33.032 8936.727 - 8996.305: 8.2313% ( 206) 00:27:33.032 8996.305 - 9055.884: 10.2610% ( 252) 00:27:33.032 9055.884 - 9115.462: 12.4436% ( 271) 00:27:33.032 9115.462 - 9175.040: 14.4733% ( 252) 00:27:33.032 9175.040 - 9234.618: 
16.7848% ( 287) 00:27:33.032 9234.618 - 9294.196: 19.3299% ( 316) 00:27:33.032 9294.196 - 9353.775: 21.9072% ( 320) 00:27:33.032 9353.775 - 9413.353: 24.6134% ( 336) 00:27:33.032 9413.353 - 9472.931: 27.3599% ( 341) 00:27:33.032 9472.931 - 9532.509: 30.1707% ( 349) 00:27:33.032 9532.509 - 9592.087: 33.1024% ( 364) 00:27:33.032 9592.087 - 9651.665: 35.9375% ( 352) 00:27:33.032 9651.665 - 9711.244: 39.0464% ( 386) 00:27:33.032 9711.244 - 9770.822: 42.2358% ( 396) 00:27:33.032 9770.822 - 9830.400: 45.5380% ( 410) 00:27:33.032 9830.400 - 9889.978: 48.8080% ( 406) 00:27:33.032 9889.978 - 9949.556: 52.0457% ( 402) 00:27:33.032 9949.556 - 10009.135: 55.1385% ( 384) 00:27:33.032 10009.135 - 10068.713: 58.1024% ( 368) 00:27:33.032 10068.713 - 10128.291: 60.8328% ( 339) 00:27:33.032 10128.291 - 10187.869: 63.4101% ( 320) 00:27:33.032 10187.869 - 10247.447: 65.9391% ( 314) 00:27:33.032 10247.447 - 10307.025: 68.3795% ( 303) 00:27:33.032 10307.025 - 10366.604: 70.6427% ( 281) 00:27:33.032 10366.604 - 10426.182: 72.7771% ( 265) 00:27:33.032 10426.182 - 10485.760: 74.8872% ( 262) 00:27:33.032 10485.760 - 10545.338: 76.8444% ( 243) 00:27:33.032 10545.338 - 10604.916: 78.7371% ( 235) 00:27:33.032 10604.916 - 10664.495: 80.4365% ( 211) 00:27:33.032 10664.495 - 10724.073: 82.0715% ( 203) 00:27:33.032 10724.073 - 10783.651: 83.5454% ( 183) 00:27:33.032 10783.651 - 10843.229: 84.9871% ( 179) 00:27:33.032 10843.229 - 10902.807: 86.2516% ( 157) 00:27:33.032 10902.807 - 10962.385: 87.2423% ( 123) 00:27:33.032 10962.385 - 11021.964: 88.1202% ( 109) 00:27:33.032 11021.964 - 11081.542: 88.9014% ( 97) 00:27:33.032 11081.542 - 11141.120: 89.5377% ( 79) 00:27:33.032 11141.120 - 11200.698: 90.0612% ( 65) 00:27:33.032 11200.698 - 11260.276: 90.5606% ( 62) 00:27:33.032 11260.276 - 11319.855: 91.0358% ( 59) 00:27:33.032 11319.855 - 11379.433: 91.4948% ( 57) 00:27:33.032 11379.433 - 11439.011: 91.8653% ( 46) 00:27:33.032 11439.011 - 11498.589: 92.2036% ( 42) 00:27:33.032 11498.589 - 11558.167: 92.5580% ( 44) 00:27:33.032 11558.167 - 11617.745: 92.8721% ( 39) 00:27:33.032 11617.745 - 11677.324: 93.1862% ( 39) 00:27:33.032 11677.324 - 11736.902: 93.4037% ( 27) 00:27:33.032 11736.902 - 11796.480: 93.5809% ( 22) 00:27:33.032 11796.480 - 11856.058: 93.7500% ( 21) 00:27:33.032 11856.058 - 11915.636: 93.9272% ( 22) 00:27:33.032 11915.636 - 11975.215: 94.0963% ( 21) 00:27:33.032 11975.215 - 12034.793: 94.3057% ( 26) 00:27:33.032 12034.793 - 12094.371: 94.4910% ( 23) 00:27:33.032 12094.371 - 12153.949: 94.6682% ( 22) 00:27:33.032 12153.949 - 12213.527: 94.8454% ( 22) 00:27:33.032 12213.527 - 12273.105: 95.0064% ( 20) 00:27:33.032 12273.105 - 12332.684: 95.1595% ( 19) 00:27:33.032 12332.684 - 12392.262: 95.3206% ( 20) 00:27:33.032 12392.262 - 12451.840: 95.4977% ( 22) 00:27:33.032 12451.840 - 12511.418: 95.6749% ( 22) 00:27:33.032 12511.418 - 12570.996: 95.8360% ( 20) 00:27:33.032 12570.996 - 12630.575: 96.0132% ( 22) 00:27:33.032 12630.575 - 12690.153: 96.1743% ( 20) 00:27:33.032 12690.153 - 12749.731: 96.3354% ( 20) 00:27:33.032 12749.731 - 12809.309: 96.4965% ( 20) 00:27:33.032 12809.309 - 12868.887: 96.6817% ( 23) 00:27:33.032 12868.887 - 12928.465: 96.8186% ( 17) 00:27:33.032 12928.465 - 12988.044: 96.9394% ( 15) 00:27:33.032 12988.044 - 13047.622: 97.0683% ( 16) 00:27:33.032 13047.622 - 13107.200: 97.1811% ( 14) 00:27:33.032 13107.200 - 13166.778: 97.2938% ( 14) 00:27:33.032 13166.778 - 13226.356: 97.4549% ( 20) 00:27:33.032 13226.356 - 13285.935: 97.5999% ( 18) 00:27:33.032 13285.935 - 13345.513: 97.7368% ( 17) 00:27:33.032 
13345.513 - 13405.091: 97.8093% ( 9) 00:27:33.032 13405.091 - 13464.669: 97.9301% ( 15) 00:27:33.032 13464.669 - 13524.247: 98.0348% ( 13) 00:27:33.032 13524.247 - 13583.825: 98.0831% ( 6) 00:27:33.032 13583.825 - 13643.404: 98.1153% ( 4) 00:27:33.032 13643.404 - 13702.982: 98.1556% ( 5) 00:27:33.032 13702.982 - 13762.560: 98.1798% ( 3) 00:27:33.032 13762.560 - 13822.138: 98.2120% ( 4) 00:27:33.032 13822.138 - 13881.716: 98.2603% ( 6) 00:27:33.032 13881.716 - 13941.295: 98.3006% ( 5) 00:27:33.032 13941.295 - 14000.873: 98.3409% ( 5) 00:27:33.032 14000.873 - 14060.451: 98.3811% ( 5) 00:27:33.032 14060.451 - 14120.029: 98.4214% ( 5) 00:27:33.032 14120.029 - 14179.607: 98.4697% ( 6) 00:27:33.032 14179.607 - 14239.185: 98.5180% ( 6) 00:27:33.032 14239.185 - 14298.764: 98.5744% ( 7) 00:27:33.032 14298.764 - 14358.342: 98.6227% ( 6) 00:27:33.032 14358.342 - 14417.920: 98.6711% ( 6) 00:27:33.032 14417.920 - 14477.498: 98.6952% ( 3) 00:27:33.032 14477.498 - 14537.076: 98.7194% ( 3) 00:27:33.032 14537.076 - 14596.655: 98.7274% ( 1) 00:27:33.032 14596.655 - 14656.233: 98.7597% ( 4) 00:27:33.032 14656.233 - 14715.811: 98.7838% ( 3) 00:27:33.032 14715.811 - 14775.389: 98.8080% ( 3) 00:27:33.032 14775.389 - 14834.967: 98.8402% ( 4) 00:27:33.032 14834.967 - 14894.545: 98.8644% ( 3) 00:27:33.032 14894.545 - 14954.124: 98.8885% ( 3) 00:27:33.032 14954.124 - 15013.702: 98.9127% ( 3) 00:27:33.032 15013.702 - 15073.280: 98.9449% ( 4) 00:27:33.032 15073.280 - 15132.858: 98.9691% ( 3) 00:27:33.032 28001.745 - 28120.902: 98.9771% ( 1) 00:27:33.032 28120.902 - 28240.058: 98.9932% ( 2) 00:27:33.032 28240.058 - 28359.215: 99.0093% ( 2) 00:27:33.032 28359.215 - 28478.371: 99.0255% ( 2) 00:27:33.032 28478.371 - 28597.527: 99.0496% ( 3) 00:27:33.032 28597.527 - 28716.684: 99.0738% ( 3) 00:27:33.032 28716.684 - 28835.840: 99.0979% ( 3) 00:27:33.032 28835.840 - 28954.996: 99.1140% ( 2) 00:27:33.032 28954.996 - 29074.153: 99.1382% ( 3) 00:27:33.032 29074.153 - 29193.309: 99.1624% ( 3) 00:27:33.032 29193.309 - 29312.465: 99.1785% ( 2) 00:27:33.032 29312.465 - 29431.622: 99.2026% ( 3) 00:27:33.032 29431.622 - 29550.778: 99.2188% ( 2) 00:27:33.032 29550.778 - 29669.935: 99.2349% ( 2) 00:27:33.032 29669.935 - 29789.091: 99.2590% ( 3) 00:27:33.032 29789.091 - 29908.247: 99.2832% ( 3) 00:27:33.032 29908.247 - 30027.404: 99.3073% ( 3) 00:27:33.032 30027.404 - 30146.560: 99.3235% ( 2) 00:27:33.032 30146.560 - 30265.716: 99.3476% ( 3) 00:27:33.032 30265.716 - 30384.873: 99.3718% ( 3) 00:27:33.032 30384.873 - 30504.029: 99.3959% ( 3) 00:27:33.032 30504.029 - 30742.342: 99.4362% ( 5) 00:27:33.032 30742.342 - 30980.655: 99.4765% ( 5) 00:27:33.032 30980.655 - 31218.967: 99.4845% ( 1) 00:27:33.032 36461.847 - 36700.160: 99.5248% ( 5) 00:27:33.032 36700.160 - 36938.473: 99.5731% ( 6) 00:27:33.032 36938.473 - 37176.785: 99.6134% ( 5) 00:27:33.032 37176.785 - 37415.098: 99.6456% ( 4) 00:27:33.032 37415.098 - 37653.411: 99.6939% ( 6) 00:27:33.032 37653.411 - 37891.724: 99.7342% ( 5) 00:27:33.032 37891.724 - 38130.036: 99.7745% ( 5) 00:27:33.032 38130.036 - 38368.349: 99.8148% ( 5) 00:27:33.032 38368.349 - 38606.662: 99.8550% ( 5) 00:27:33.032 38606.662 - 38844.975: 99.9034% ( 6) 00:27:33.032 38844.975 - 39083.287: 99.9436% ( 5) 00:27:33.032 39083.287 - 39321.600: 99.9839% ( 5) 00:27:33.032 39321.600 - 39559.913: 100.0000% ( 2) 00:27:33.032 00:27:33.032 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:27:33.032 ============================================================================== 00:27:33.032 Range in us 
Cumulative IO count 00:27:33.032 8102.633 - 8162.211: 0.0081% ( 1) 00:27:33.032 8162.211 - 8221.789: 0.0403% ( 4) 00:27:33.032 8221.789 - 8281.367: 0.0966% ( 7) 00:27:33.032 8281.367 - 8340.945: 0.1772% ( 10) 00:27:33.032 8340.945 - 8400.524: 0.2899% ( 14) 00:27:33.032 8400.524 - 8460.102: 0.4671% ( 22) 00:27:33.032 8460.102 - 8519.680: 0.6765% ( 26) 00:27:33.032 8519.680 - 8579.258: 0.9021% ( 28) 00:27:33.032 8579.258 - 8638.836: 1.2001% ( 37) 00:27:33.032 8638.836 - 8698.415: 1.7236% ( 65) 00:27:33.032 8698.415 - 8757.993: 2.6256% ( 112) 00:27:33.032 8757.993 - 8817.571: 3.7049% ( 134) 00:27:33.032 8817.571 - 8877.149: 5.0419% ( 166) 00:27:33.032 8877.149 - 8936.727: 6.5399% ( 186) 00:27:33.032 8936.727 - 8996.305: 8.3360% ( 223) 00:27:33.032 8996.305 - 9055.884: 10.2287% ( 235) 00:27:33.032 9055.884 - 9115.462: 12.2584% ( 252) 00:27:33.032 9115.462 - 9175.040: 14.5377% ( 283) 00:27:33.032 9175.040 - 9234.618: 16.7526% ( 275) 00:27:33.032 9234.618 - 9294.196: 19.2171% ( 306) 00:27:33.032 9294.196 - 9353.775: 21.7622% ( 316) 00:27:33.032 9353.775 - 9413.353: 24.3879% ( 326) 00:27:33.032 9413.353 - 9472.931: 27.1505% ( 343) 00:27:33.032 9472.931 - 9532.509: 29.8727% ( 338) 00:27:33.032 9532.509 - 9592.087: 32.7722% ( 360) 00:27:33.032 9592.087 - 9651.665: 35.8650% ( 384) 00:27:33.032 9651.665 - 9711.244: 38.9095% ( 378) 00:27:33.032 9711.244 - 9770.822: 42.1150% ( 398) 00:27:33.032 9770.822 - 9830.400: 45.4253% ( 411) 00:27:33.033 9830.400 - 9889.978: 48.6308% ( 398) 00:27:33.033 9889.978 - 9949.556: 51.8202% ( 396) 00:27:33.033 9949.556 - 10009.135: 54.8244% ( 373) 00:27:33.033 10009.135 - 10068.713: 57.7803% ( 367) 00:27:33.033 10068.713 - 10128.291: 60.4945% ( 337) 00:27:33.033 10128.291 - 10187.869: 63.1282% ( 327) 00:27:33.033 10187.869 - 10247.447: 65.7297% ( 323) 00:27:33.033 10247.447 - 10307.025: 68.1459% ( 300) 00:27:33.033 10307.025 - 10366.604: 70.5541% ( 299) 00:27:33.033 10366.604 - 10426.182: 72.8012% ( 279) 00:27:33.033 10426.182 - 10485.760: 74.9034% ( 261) 00:27:33.033 10485.760 - 10545.338: 76.9008% ( 248) 00:27:33.033 10545.338 - 10604.916: 78.6727% ( 220) 00:27:33.033 10604.916 - 10664.495: 80.2996% ( 202) 00:27:33.033 10664.495 - 10724.073: 81.9185% ( 201) 00:27:33.033 10724.073 - 10783.651: 83.4327% ( 188) 00:27:33.033 10783.651 - 10843.229: 84.6247% ( 148) 00:27:33.033 10843.229 - 10902.807: 85.8006% ( 146) 00:27:33.033 10902.807 - 10962.385: 86.8798% ( 134) 00:27:33.033 10962.385 - 11021.964: 87.8061% ( 115) 00:27:33.033 11021.964 - 11081.542: 88.6356% ( 103) 00:27:33.033 11081.542 - 11141.120: 89.3202% ( 85) 00:27:33.033 11141.120 - 11200.698: 89.9646% ( 80) 00:27:33.033 11200.698 - 11260.276: 90.4639% ( 62) 00:27:33.033 11260.276 - 11319.855: 91.0116% ( 68) 00:27:33.033 11319.855 - 11379.433: 91.4465% ( 54) 00:27:33.033 11379.433 - 11439.011: 91.8492% ( 50) 00:27:33.033 11439.011 - 11498.589: 92.1875% ( 42) 00:27:33.033 11498.589 - 11558.167: 92.4936% ( 38) 00:27:33.033 11558.167 - 11617.745: 92.8238% ( 41) 00:27:33.033 11617.745 - 11677.324: 93.1137% ( 36) 00:27:33.033 11677.324 - 11736.902: 93.3473% ( 29) 00:27:33.033 11736.902 - 11796.480: 93.5889% ( 30) 00:27:33.033 11796.480 - 11856.058: 93.8386% ( 31) 00:27:33.033 11856.058 - 11915.636: 94.0802% ( 30) 00:27:33.033 11915.636 - 11975.215: 94.2655% ( 23) 00:27:33.033 11975.215 - 12034.793: 94.4668% ( 25) 00:27:33.033 12034.793 - 12094.371: 94.6682% ( 25) 00:27:33.033 12094.371 - 12153.949: 94.8534% ( 23) 00:27:33.033 12153.949 - 12213.527: 95.0306% ( 22) 00:27:33.033 12213.527 - 12273.105: 95.2159% ( 23) 
00:27:33.033 12273.105 - 12332.684: 95.3769% ( 20) 00:27:33.033 12332.684 - 12392.262: 95.5219% ( 18) 00:27:33.033 12392.262 - 12451.840: 95.6910% ( 21) 00:27:33.033 12451.840 - 12511.418: 95.8360% ( 18) 00:27:33.033 12511.418 - 12570.996: 95.9971% ( 20) 00:27:33.033 12570.996 - 12630.575: 96.1662% ( 21) 00:27:33.033 12630.575 - 12690.153: 96.3273% ( 20) 00:27:33.033 12690.153 - 12749.731: 96.4803% ( 19) 00:27:33.033 12749.731 - 12809.309: 96.6575% ( 22) 00:27:33.033 12809.309 - 12868.887: 96.8025% ( 18) 00:27:33.033 12868.887 - 12928.465: 96.9555% ( 19) 00:27:33.033 12928.465 - 12988.044: 97.1005% ( 18) 00:27:33.033 12988.044 - 13047.622: 97.2294% ( 16) 00:27:33.033 13047.622 - 13107.200: 97.3421% ( 14) 00:27:33.033 13107.200 - 13166.778: 97.4710% ( 16) 00:27:33.033 13166.778 - 13226.356: 97.5999% ( 16) 00:27:33.033 13226.356 - 13285.935: 97.7046% ( 13) 00:27:33.033 13285.935 - 13345.513: 97.7851% ( 10) 00:27:33.033 13345.513 - 13405.091: 97.8657% ( 10) 00:27:33.033 13405.091 - 13464.669: 97.9381% ( 9) 00:27:33.033 13464.669 - 13524.247: 97.9865% ( 6) 00:27:33.033 13524.247 - 13583.825: 98.0267% ( 5) 00:27:33.033 13583.825 - 13643.404: 98.0751% ( 6) 00:27:33.033 13643.404 - 13702.982: 98.1234% ( 6) 00:27:33.033 13702.982 - 13762.560: 98.1637% ( 5) 00:27:33.033 13762.560 - 13822.138: 98.2120% ( 6) 00:27:33.033 13822.138 - 13881.716: 98.2523% ( 5) 00:27:33.033 13881.716 - 13941.295: 98.3086% ( 7) 00:27:33.033 13941.295 - 14000.873: 98.3731% ( 8) 00:27:33.033 14000.873 - 14060.451: 98.4214% ( 6) 00:27:33.033 14060.451 - 14120.029: 98.4697% ( 6) 00:27:33.033 14120.029 - 14179.607: 98.5180% ( 6) 00:27:33.033 14179.607 - 14239.185: 98.5583% ( 5) 00:27:33.033 14239.185 - 14298.764: 98.6066% ( 6) 00:27:33.033 14298.764 - 14358.342: 98.6550% ( 6) 00:27:33.033 14358.342 - 14417.920: 98.6872% ( 4) 00:27:33.033 14417.920 - 14477.498: 98.7113% ( 3) 00:27:33.033 14477.498 - 14537.076: 98.7355% ( 3) 00:27:33.033 14537.076 - 14596.655: 98.7597% ( 3) 00:27:33.033 14596.655 - 14656.233: 98.7838% ( 3) 00:27:33.033 14656.233 - 14715.811: 98.8080% ( 3) 00:27:33.033 14715.811 - 14775.389: 98.8402% ( 4) 00:27:33.033 14775.389 - 14834.967: 98.8644% ( 3) 00:27:33.033 14834.967 - 14894.545: 98.8885% ( 3) 00:27:33.033 14894.545 - 14954.124: 98.9127% ( 3) 00:27:33.033 14954.124 - 15013.702: 98.9369% ( 3) 00:27:33.033 15013.702 - 15073.280: 98.9610% ( 3) 00:27:33.033 15073.280 - 15132.858: 98.9691% ( 1) 00:27:33.033 24903.680 - 25022.836: 98.9771% ( 1) 00:27:33.033 25022.836 - 25141.993: 98.9932% ( 2) 00:27:33.033 25141.993 - 25261.149: 99.0093% ( 2) 00:27:33.033 25261.149 - 25380.305: 99.0255% ( 2) 00:27:33.033 25380.305 - 25499.462: 99.0496% ( 3) 00:27:33.033 25499.462 - 25618.618: 99.0738% ( 3) 00:27:33.033 25618.618 - 25737.775: 99.0979% ( 3) 00:27:33.033 25737.775 - 25856.931: 99.1140% ( 2) 00:27:33.033 25856.931 - 25976.087: 99.1382% ( 3) 00:27:33.033 25976.087 - 26095.244: 99.1543% ( 2) 00:27:33.033 26095.244 - 26214.400: 99.1785% ( 3) 00:27:33.033 26214.400 - 26333.556: 99.2026% ( 3) 00:27:33.033 26333.556 - 26452.713: 99.2188% ( 2) 00:27:33.033 26452.713 - 26571.869: 99.2429% ( 3) 00:27:33.033 26571.869 - 26691.025: 99.2590% ( 2) 00:27:33.033 26691.025 - 26810.182: 99.2832% ( 3) 00:27:33.033 26810.182 - 26929.338: 99.3073% ( 3) 00:27:33.033 26929.338 - 27048.495: 99.3235% ( 2) 00:27:33.033 27048.495 - 27167.651: 99.3476% ( 3) 00:27:33.033 27167.651 - 27286.807: 99.3718% ( 3) 00:27:33.033 27286.807 - 27405.964: 99.3959% ( 3) 00:27:33.033 27405.964 - 27525.120: 99.4120% ( 2) 00:27:33.033 27525.120 - 27644.276: 
99.4362% ( 3) 00:27:33.033 27644.276 - 27763.433: 99.4523% ( 2) 00:27:33.033 27763.433 - 27882.589: 99.4765% ( 3) 00:27:33.033 27882.589 - 28001.745: 99.4845% ( 1) 00:27:33.033 33363.782 - 33602.095: 99.5168% ( 4) 00:27:33.033 33602.095 - 33840.407: 99.5570% ( 5) 00:27:33.033 33840.407 - 34078.720: 99.6053% ( 6) 00:27:33.033 34078.720 - 34317.033: 99.6456% ( 5) 00:27:33.033 34317.033 - 34555.345: 99.6859% ( 5) 00:27:33.033 34555.345 - 34793.658: 99.7262% ( 5) 00:27:33.033 34793.658 - 35031.971: 99.7664% ( 5) 00:27:33.033 35031.971 - 35270.284: 99.8067% ( 5) 00:27:33.033 35270.284 - 35508.596: 99.8550% ( 6) 00:27:33.033 35508.596 - 35746.909: 99.8953% ( 5) 00:27:33.033 35746.909 - 35985.222: 99.9436% ( 6) 00:27:33.033 35985.222 - 36223.535: 99.9839% ( 5) 00:27:33.033 36223.535 - 36461.847: 100.0000% ( 2) 00:27:33.033 00:27:33.033 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:27:33.033 ============================================================================== 00:27:33.033 Range in us Cumulative IO count 00:27:33.033 8221.789 - 8281.367: 0.0322% ( 4) 00:27:33.033 8281.367 - 8340.945: 0.1369% ( 13) 00:27:33.033 8340.945 - 8400.524: 0.2980% ( 20) 00:27:33.033 8400.524 - 8460.102: 0.4591% ( 20) 00:27:33.033 8460.102 - 8519.680: 0.6282% ( 21) 00:27:33.033 8519.680 - 8579.258: 0.9262% ( 37) 00:27:33.033 8579.258 - 8638.836: 1.2726% ( 43) 00:27:33.033 8638.836 - 8698.415: 1.8686% ( 74) 00:27:33.033 8698.415 - 8757.993: 2.5854% ( 89) 00:27:33.033 8757.993 - 8817.571: 3.6646% ( 134) 00:27:33.033 8817.571 - 8877.149: 5.1869% ( 189) 00:27:33.033 8877.149 - 8936.727: 6.6608% ( 183) 00:27:33.033 8936.727 - 8996.305: 8.3038% ( 204) 00:27:33.033 8996.305 - 9055.884: 10.2448% ( 241) 00:27:33.033 9055.884 - 9115.462: 12.2262% ( 246) 00:27:33.033 9115.462 - 9175.040: 14.4974% ( 282) 00:27:33.033 9175.040 - 9234.618: 16.8573% ( 293) 00:27:33.033 9234.618 - 9294.196: 19.4024% ( 316) 00:27:33.033 9294.196 - 9353.775: 22.0200% ( 325) 00:27:33.033 9353.775 - 9413.353: 24.6053% ( 321) 00:27:33.033 9413.353 - 9472.931: 27.2874% ( 333) 00:27:33.033 9472.931 - 9532.509: 30.1707% ( 358) 00:27:33.033 9532.509 - 9592.087: 33.0300% ( 355) 00:27:33.033 9592.087 - 9651.665: 36.0583% ( 376) 00:27:33.033 9651.665 - 9711.244: 39.1269% ( 381) 00:27:33.033 9711.244 - 9770.822: 42.2519% ( 388) 00:27:33.033 9770.822 - 9830.400: 45.4172% ( 393) 00:27:33.033 9830.400 - 9889.978: 48.6308% ( 399) 00:27:33.033 9889.978 - 9949.556: 51.7316% ( 385) 00:27:33.033 9949.556 - 10009.135: 54.8164% ( 383) 00:27:33.033 10009.135 - 10068.713: 57.6756% ( 355) 00:27:33.033 10068.713 - 10128.291: 60.4301% ( 342) 00:27:33.033 10128.291 - 10187.869: 63.1443% ( 337) 00:27:33.033 10187.869 - 10247.447: 65.6492% ( 311) 00:27:33.033 10247.447 - 10307.025: 68.1540% ( 311) 00:27:33.033 10307.025 - 10366.604: 70.4575% ( 286) 00:27:33.033 10366.604 - 10426.182: 72.7287% ( 282) 00:27:33.033 10426.182 - 10485.760: 74.8067% ( 258) 00:27:33.033 10485.760 - 10545.338: 76.7477% ( 241) 00:27:33.033 10545.338 - 10604.916: 78.5921% ( 229) 00:27:33.033 10604.916 - 10664.495: 80.2755% ( 209) 00:27:33.033 10664.495 - 10724.073: 81.8218% ( 192) 00:27:33.033 10724.073 - 10783.651: 83.1508% ( 165) 00:27:33.033 10783.651 - 10843.229: 84.2945% ( 142) 00:27:33.033 10843.229 - 10902.807: 85.4140% ( 139) 00:27:33.033 10902.807 - 10962.385: 86.4932% ( 134) 00:27:33.033 10962.385 - 11021.964: 87.3711% ( 109) 00:27:33.033 11021.964 - 11081.542: 88.1443% ( 96) 00:27:33.033 11081.542 - 11141.120: 88.8692% ( 90) 00:27:33.033 11141.120 - 11200.698: 89.5135% ( 80) 
00:27:33.033 11200.698 - 11260.276: 90.1579% ( 80) 00:27:33.033 11260.276 - 11319.855: 90.6250% ( 58) 00:27:33.033 11319.855 - 11379.433: 91.1002% ( 59) 00:27:33.033 11379.433 - 11439.011: 91.5110% ( 51) 00:27:33.033 11439.011 - 11498.589: 91.9298% ( 52) 00:27:33.033 11498.589 - 11558.167: 92.3405% ( 51) 00:27:33.033 11558.167 - 11617.745: 92.7513% ( 51) 00:27:33.033 11617.745 - 11677.324: 93.0815% ( 41) 00:27:33.033 11677.324 - 11736.902: 93.2990% ( 27) 00:27:33.033 11736.902 - 11796.480: 93.5406% ( 30) 00:27:33.033 11796.480 - 11856.058: 93.7178% ( 22) 00:27:33.033 11856.058 - 11915.636: 93.9433% ( 28) 00:27:33.033 11915.636 - 11975.215: 94.1769% ( 29) 00:27:33.033 11975.215 - 12034.793: 94.4104% ( 29) 00:27:33.033 12034.793 - 12094.371: 94.6521% ( 30) 00:27:33.033 12094.371 - 12153.949: 94.8615% ( 26) 00:27:33.033 12153.949 - 12213.527: 95.0789% ( 27) 00:27:33.033 12213.527 - 12273.105: 95.2883% ( 26) 00:27:33.033 12273.105 - 12332.684: 95.4897% ( 25) 00:27:33.033 12332.684 - 12392.262: 95.6669% ( 22) 00:27:33.033 12392.262 - 12451.840: 95.8360% ( 21) 00:27:33.033 12451.840 - 12511.418: 95.9971% ( 20) 00:27:33.033 12511.418 - 12570.996: 96.1662% ( 21) 00:27:33.033 12570.996 - 12630.575: 96.2790% ( 14) 00:27:33.033 12630.575 - 12690.153: 96.3998% ( 15) 00:27:33.033 12690.153 - 12749.731: 96.5287% ( 16) 00:27:33.033 12749.731 - 12809.309: 96.6978% ( 21) 00:27:33.033 12809.309 - 12868.887: 96.8267% ( 16) 00:27:33.033 12868.887 - 12928.465: 96.9555% ( 16) 00:27:33.033 12928.465 - 12988.044: 97.0844% ( 16) 00:27:33.033 12988.044 - 13047.622: 97.1891% ( 13) 00:27:33.033 13047.622 - 13107.200: 97.2938% ( 13) 00:27:33.033 13107.200 - 13166.778: 97.3663% ( 9) 00:27:33.033 13166.778 - 13226.356: 97.4307% ( 8) 00:27:33.033 13226.356 - 13285.935: 97.5274% ( 12) 00:27:33.033 13285.935 - 13345.513: 97.6160% ( 11) 00:27:33.033 13345.513 - 13405.091: 97.6804% ( 8) 00:27:33.033 13405.091 - 13464.669: 97.7529% ( 9) 00:27:33.033 13464.669 - 13524.247: 97.8415% ( 11) 00:27:33.033 13524.247 - 13583.825: 97.9301% ( 11) 00:27:33.033 13583.825 - 13643.404: 97.9945% ( 8) 00:27:33.033 13643.404 - 13702.982: 98.0670% ( 9) 00:27:33.033 13702.982 - 13762.560: 98.1234% ( 7) 00:27:33.033 13762.560 - 13822.138: 98.1878% ( 8) 00:27:33.033 13822.138 - 13881.716: 98.2442% ( 7) 00:27:33.033 13881.716 - 13941.295: 98.2925% ( 6) 00:27:33.033 13941.295 - 14000.873: 98.3409% ( 6) 00:27:33.033 14000.873 - 14060.451: 98.3972% ( 7) 00:27:33.033 14060.451 - 14120.029: 98.4617% ( 8) 00:27:33.033 14120.029 - 14179.607: 98.5100% ( 6) 00:27:33.033 14179.607 - 14239.185: 98.5664% ( 7) 00:27:33.033 14239.185 - 14298.764: 98.6227% ( 7) 00:27:33.033 14298.764 - 14358.342: 98.6711% ( 6) 00:27:33.034 14358.342 - 14417.920: 98.7274% ( 7) 00:27:33.034 14417.920 - 14477.498: 98.7838% ( 7) 00:27:33.034 14477.498 - 14537.076: 98.8160% ( 4) 00:27:33.034 14537.076 - 14596.655: 98.8563% ( 5) 00:27:33.034 14596.655 - 14656.233: 98.8724% ( 2) 00:27:33.034 14656.233 - 14715.811: 98.8885% ( 2) 00:27:33.034 14715.811 - 14775.389: 98.9127% ( 3) 00:27:33.034 14775.389 - 14834.967: 98.9207% ( 1) 00:27:33.034 14834.967 - 14894.545: 98.9369% ( 2) 00:27:33.034 14894.545 - 14954.124: 98.9530% ( 2) 00:27:33.034 14954.124 - 15013.702: 98.9610% ( 1) 00:27:33.034 15013.702 - 15073.280: 98.9691% ( 1) 00:27:33.034 21686.458 - 21805.615: 98.9852% ( 2) 00:27:33.034 21805.615 - 21924.771: 99.0013% ( 2) 00:27:33.034 21924.771 - 22043.927: 99.0255% ( 3) 00:27:33.034 22043.927 - 22163.084: 99.0496% ( 3) 00:27:33.034 22163.084 - 22282.240: 99.0738% ( 3) 00:27:33.034 
22282.240 - 22401.396: 99.0979% ( 3) 00:27:33.034 22401.396 - 22520.553: 99.1140% ( 2) 00:27:33.034 22520.553 - 22639.709: 99.1382% ( 3) 00:27:33.034 22639.709 - 22758.865: 99.1624% ( 3) 00:27:33.034 22758.865 - 22878.022: 99.1785% ( 2) 00:27:33.034 22878.022 - 22997.178: 99.2026% ( 3) 00:27:33.034 22997.178 - 23116.335: 99.2268% ( 3) 00:27:33.034 23116.335 - 23235.491: 99.2429% ( 2) 00:27:33.034 23235.491 - 23354.647: 99.2590% ( 2) 00:27:33.034 23354.647 - 23473.804: 99.2832% ( 3) 00:27:33.034 23473.804 - 23592.960: 99.3073% ( 3) 00:27:33.034 23592.960 - 23712.116: 99.3235% ( 2) 00:27:33.034 23712.116 - 23831.273: 99.3476% ( 3) 00:27:33.034 23831.273 - 23950.429: 99.3637% ( 2) 00:27:33.034 23950.429 - 24069.585: 99.3879% ( 3) 00:27:33.034 24069.585 - 24188.742: 99.4040% ( 2) 00:27:33.034 24188.742 - 24307.898: 99.4282% ( 3) 00:27:33.034 24307.898 - 24427.055: 99.4523% ( 3) 00:27:33.034 24427.055 - 24546.211: 99.4765% ( 3) 00:27:33.034 24546.211 - 24665.367: 99.4845% ( 1) 00:27:33.034 30146.560 - 30265.716: 99.5006% ( 2) 00:27:33.034 30265.716 - 30384.873: 99.5248% ( 3) 00:27:33.034 30504.029 - 30742.342: 99.5731% ( 6) 00:27:33.034 30742.342 - 30980.655: 99.6053% ( 4) 00:27:33.034 30980.655 - 31218.967: 99.6376% ( 4) 00:27:33.034 31218.967 - 31457.280: 99.6859% ( 6) 00:27:33.034 31457.280 - 31695.593: 99.7262% ( 5) 00:27:33.034 31695.593 - 31933.905: 99.7664% ( 5) 00:27:33.034 31933.905 - 32172.218: 99.8067% ( 5) 00:27:33.034 32172.218 - 32410.531: 99.8470% ( 5) 00:27:33.034 32410.531 - 32648.844: 99.8953% ( 6) 00:27:33.034 32648.844 - 32887.156: 99.9356% ( 5) 00:27:33.034 32887.156 - 33125.469: 99.9839% ( 6) 00:27:33.034 33125.469 - 33363.782: 100.0000% ( 2) 00:27:33.034 00:27:33.034 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:27:33.034 ============================================================================== 00:27:33.034 Range in us Cumulative IO count 00:27:33.034 8162.211 - 8221.789: 0.0403% ( 5) 00:27:33.034 8221.789 - 8281.367: 0.0644% ( 3) 00:27:33.034 8281.367 - 8340.945: 0.1530% ( 11) 00:27:33.034 8340.945 - 8400.524: 0.2577% ( 13) 00:27:33.034 8400.524 - 8460.102: 0.4269% ( 21) 00:27:33.034 8460.102 - 8519.680: 0.6282% ( 25) 00:27:33.034 8519.680 - 8579.258: 0.8940% ( 33) 00:27:33.034 8579.258 - 8638.836: 1.3048% ( 51) 00:27:33.034 8638.836 - 8698.415: 1.8283% ( 65) 00:27:33.034 8698.415 - 8757.993: 2.6740% ( 105) 00:27:33.034 8757.993 - 8817.571: 3.7210% ( 130) 00:27:33.034 8817.571 - 8877.149: 5.1707% ( 180) 00:27:33.034 8877.149 - 8936.727: 6.7171% ( 192) 00:27:33.034 8936.727 - 8996.305: 8.5293% ( 225) 00:27:33.034 8996.305 - 9055.884: 10.3898% ( 231) 00:27:33.034 9055.884 - 9115.462: 12.5081% ( 263) 00:27:33.034 9115.462 - 9175.040: 14.6827% ( 270) 00:27:33.034 9175.040 - 9234.618: 16.9781% ( 285) 00:27:33.034 9234.618 - 9294.196: 19.4668% ( 309) 00:27:33.034 9294.196 - 9353.775: 21.9475% ( 308) 00:27:33.034 9353.775 - 9413.353: 24.5248% ( 320) 00:27:33.034 9413.353 - 9472.931: 27.3518% ( 351) 00:27:33.034 9472.931 - 9532.509: 30.3318% ( 370) 00:27:33.034 9532.509 - 9592.087: 33.4568% ( 388) 00:27:33.034 9592.087 - 9651.665: 36.3563% ( 360) 00:27:33.034 9651.665 - 9711.244: 39.3444% ( 371) 00:27:33.034 9711.244 - 9770.822: 42.5983% ( 404) 00:27:33.034 9770.822 - 9830.400: 45.7796% ( 395) 00:27:33.034 9830.400 - 9889.978: 48.9610% ( 395) 00:27:33.034 9889.978 - 9949.556: 52.1263% ( 393) 00:27:33.034 9949.556 - 10009.135: 55.2110% ( 383) 00:27:33.034 10009.135 - 10068.713: 57.9575% ( 341) 00:27:33.034 10068.713 - 10128.291: 60.7039% ( 341) 
00:27:33.034 10128.291 - 10187.869: 63.1846% ( 308) 00:27:33.034 10187.869 - 10247.447: 65.6572% ( 307) 00:27:33.034 10247.447 - 10307.025: 68.1540% ( 310) 00:27:33.034 10307.025 - 10366.604: 70.5541% ( 298) 00:27:33.034 10366.604 - 10426.182: 72.7529% ( 273) 00:27:33.034 10426.182 - 10485.760: 74.7906% ( 253) 00:27:33.034 10485.760 - 10545.338: 76.7397% ( 242) 00:27:33.034 10545.338 - 10604.916: 78.6244% ( 234) 00:27:33.034 10604.916 - 10664.495: 80.3399% ( 213) 00:27:33.034 10664.495 - 10724.073: 81.8782% ( 191) 00:27:33.034 10724.073 - 10783.651: 83.1991% ( 164) 00:27:33.034 10783.651 - 10843.229: 84.3669% ( 145) 00:27:33.034 10843.229 - 10902.807: 85.5187% ( 143) 00:27:33.034 10902.807 - 10962.385: 86.6060% ( 135) 00:27:33.034 10962.385 - 11021.964: 87.5403% ( 116) 00:27:33.034 11021.964 - 11081.542: 88.3860% ( 105) 00:27:33.034 11081.542 - 11141.120: 89.1108% ( 90) 00:27:33.034 11141.120 - 11200.698: 89.7310% ( 77) 00:27:33.034 11200.698 - 11260.276: 90.2867% ( 69) 00:27:33.034 11260.276 - 11319.855: 90.8264% ( 67) 00:27:33.034 11319.855 - 11379.433: 91.2693% ( 55) 00:27:33.034 11379.433 - 11439.011: 91.6881% ( 52) 00:27:33.034 11439.011 - 11498.589: 92.0747% ( 48) 00:27:33.034 11498.589 - 11558.167: 92.4694% ( 49) 00:27:33.034 11558.167 - 11617.745: 92.8238% ( 44) 00:27:33.034 11617.745 - 11677.324: 93.1298% ( 38) 00:27:33.034 11677.324 - 11736.902: 93.3876% ( 32) 00:27:33.034 11736.902 - 11796.480: 93.5648% ( 22) 00:27:33.034 11796.480 - 11856.058: 93.7178% ( 19) 00:27:33.034 11856.058 - 11915.636: 93.8305% ( 14) 00:27:33.034 11915.636 - 11975.215: 93.9755% ( 18) 00:27:33.034 11975.215 - 12034.793: 94.1608% ( 23) 00:27:33.034 12034.793 - 12094.371: 94.3218% ( 20) 00:27:33.034 12094.371 - 12153.949: 94.4990% ( 22) 00:27:33.034 12153.949 - 12213.527: 94.7004% ( 25) 00:27:33.034 12213.527 - 12273.105: 94.8856% ( 23) 00:27:33.034 12273.105 - 12332.684: 95.1111% ( 28) 00:27:33.034 12332.684 - 12392.262: 95.2964% ( 23) 00:27:33.034 12392.262 - 12451.840: 95.4494% ( 19) 00:27:33.034 12451.840 - 12511.418: 95.6186% ( 21) 00:27:33.034 12511.418 - 12570.996: 95.8038% ( 23) 00:27:33.034 12570.996 - 12630.575: 95.9810% ( 22) 00:27:33.034 12630.575 - 12690.153: 96.1421% ( 20) 00:27:33.034 12690.153 - 12749.731: 96.3112% ( 21) 00:27:33.034 12749.731 - 12809.309: 96.4562% ( 18) 00:27:33.034 12809.309 - 12868.887: 96.5770% ( 15) 00:27:33.034 12868.887 - 12928.465: 96.6736% ( 12) 00:27:33.034 12928.465 - 12988.044: 96.7864% ( 14) 00:27:33.034 12988.044 - 13047.622: 96.8992% ( 14) 00:27:33.034 13047.622 - 13107.200: 96.9878% ( 11) 00:27:33.034 13107.200 - 13166.778: 97.0925% ( 13) 00:27:33.034 13166.778 - 13226.356: 97.1730% ( 10) 00:27:33.034 13226.356 - 13285.935: 97.2294% ( 7) 00:27:33.034 13285.935 - 13345.513: 97.3099% ( 10) 00:27:33.034 13345.513 - 13405.091: 97.3985% ( 11) 00:27:33.034 13405.091 - 13464.669: 97.4630% ( 8) 00:27:33.034 13464.669 - 13524.247: 97.5435% ( 10) 00:27:33.034 13524.247 - 13583.825: 97.6240% ( 10) 00:27:33.034 13583.825 - 13643.404: 97.7126% ( 11) 00:27:33.034 13643.404 - 13702.982: 97.7851% ( 9) 00:27:33.034 13702.982 - 13762.560: 97.8657% ( 10) 00:27:33.034 13762.560 - 13822.138: 97.9381% ( 9) 00:27:33.034 13822.138 - 13881.716: 98.0106% ( 9) 00:27:33.034 13881.716 - 13941.295: 98.0912% ( 10) 00:27:33.034 13941.295 - 14000.873: 98.1717% ( 10) 00:27:33.034 14000.873 - 14060.451: 98.2442% ( 9) 00:27:33.034 14060.451 - 14120.029: 98.3167% ( 9) 00:27:33.034 14120.029 - 14179.607: 98.3811% ( 8) 00:27:33.034 14179.607 - 14239.185: 98.4456% ( 8) 00:27:33.034 14239.185 - 
14298.764: 98.5100% ( 8) 00:27:33.034 14298.764 - 14358.342: 98.5664% ( 7) 00:27:33.034 14358.342 - 14417.920: 98.6227% ( 7) 00:27:33.034 14417.920 - 14477.498: 98.6952% ( 9) 00:27:33.034 14477.498 - 14537.076: 98.7597% ( 8) 00:27:33.034 14537.076 - 14596.655: 98.8160% ( 7) 00:27:33.034 14596.655 - 14656.233: 98.8724% ( 7) 00:27:33.034 14656.233 - 14715.811: 98.8966% ( 3) 00:27:33.034 14715.811 - 14775.389: 98.9127% ( 2) 00:27:33.034 14775.389 - 14834.967: 98.9369% ( 3) 00:27:33.034 14834.967 - 14894.545: 98.9530% ( 2) 00:27:33.034 14894.545 - 14954.124: 98.9691% ( 2) 00:27:33.034 18588.393 - 18707.549: 98.9852% ( 2) 00:27:33.034 18707.549 - 18826.705: 99.0093% ( 3) 00:27:33.034 18826.705 - 18945.862: 99.0174% ( 1) 00:27:33.034 18945.862 - 19065.018: 99.0416% ( 3) 00:27:33.034 19065.018 - 19184.175: 99.0657% ( 3) 00:27:33.034 19184.175 - 19303.331: 99.0899% ( 3) 00:27:33.034 19303.331 - 19422.487: 99.1060% ( 2) 00:27:33.034 19422.487 - 19541.644: 99.1302% ( 3) 00:27:33.034 19541.644 - 19660.800: 99.1543% ( 3) 00:27:33.034 19660.800 - 19779.956: 99.1704% ( 2) 00:27:33.034 19779.956 - 19899.113: 99.1946% ( 3) 00:27:33.034 19899.113 - 20018.269: 99.2188% ( 3) 00:27:33.034 20018.269 - 20137.425: 99.2268% ( 1) 00:27:33.034 20137.425 - 20256.582: 99.2510% ( 3) 00:27:33.034 20256.582 - 20375.738: 99.2751% ( 3) 00:27:33.034 20375.738 - 20494.895: 99.2912% ( 2) 00:27:33.034 20494.895 - 20614.051: 99.3154% ( 3) 00:27:33.034 20614.051 - 20733.207: 99.3396% ( 3) 00:27:33.034 20733.207 - 20852.364: 99.3637% ( 3) 00:27:33.034 20852.364 - 20971.520: 99.3798% ( 2) 00:27:33.034 20971.520 - 21090.676: 99.4040% ( 3) 00:27:33.034 21090.676 - 21209.833: 99.4282% ( 3) 00:27:33.034 21209.833 - 21328.989: 99.4443% ( 2) 00:27:33.034 21328.989 - 21448.145: 99.4684% ( 3) 00:27:33.034 21448.145 - 21567.302: 99.4845% ( 2) 00:27:33.034 27048.495 - 27167.651: 99.5006% ( 2) 00:27:33.034 27167.651 - 27286.807: 99.5168% ( 2) 00:27:33.034 27286.807 - 27405.964: 99.5409% ( 3) 00:27:33.034 27405.964 - 27525.120: 99.5570% ( 2) 00:27:33.034 27525.120 - 27644.276: 99.5812% ( 3) 00:27:33.034 27644.276 - 27763.433: 99.6053% ( 3) 00:27:33.035 27763.433 - 27882.589: 99.6215% ( 2) 00:27:33.035 27882.589 - 28001.745: 99.6456% ( 3) 00:27:33.035 28001.745 - 28120.902: 99.6698% ( 3) 00:27:33.035 28120.902 - 28240.058: 99.6939% ( 3) 00:27:33.035 28240.058 - 28359.215: 99.7181% ( 3) 00:27:33.035 28359.215 - 28478.371: 99.7342% ( 2) 00:27:33.035 28478.371 - 28597.527: 99.7584% ( 3) 00:27:33.035 28597.527 - 28716.684: 99.7825% ( 3) 00:27:33.035 28716.684 - 28835.840: 99.7986% ( 2) 00:27:33.035 28835.840 - 28954.996: 99.8228% ( 3) 00:27:33.035 28954.996 - 29074.153: 99.8389% ( 2) 00:27:33.035 29074.153 - 29193.309: 99.8631% ( 3) 00:27:33.035 29193.309 - 29312.465: 99.8872% ( 3) 00:27:33.035 29312.465 - 29431.622: 99.9034% ( 2) 00:27:33.035 29431.622 - 29550.778: 99.9275% ( 3) 00:27:33.035 29550.778 - 29669.935: 99.9517% ( 3) 00:27:33.035 29669.935 - 29789.091: 99.9678% ( 2) 00:27:33.035 29789.091 - 29908.247: 99.9919% ( 3) 00:27:33.035 29908.247 - 30027.404: 100.0000% ( 1) 00:27:33.035 00:27:33.035 07:36:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:27:34.408 Initializing NVMe Controllers 00:27:34.408 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:27:34.408 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:27:34.408 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:27:34.408 Attached to NVMe Controller at 0000:00:12.0 
[1b36:0010] 00:27:34.408 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:27:34.408 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:27:34.408 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:27:34.408 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:27:34.408 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:27:34.408 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:27:34.408 Initialization complete. Launching workers. 00:27:34.408 ======================================================== 00:27:34.408 Latency(us) 00:27:34.408 Device Information : IOPS MiB/s Average min max 00:27:34.408 PCIE (0000:00:10.0) NSID 1 from core 0: 10865.14 127.33 11801.70 9588.29 51944.01 00:27:34.408 PCIE (0000:00:11.0) NSID 1 from core 0: 10865.14 127.33 11767.87 9768.65 48771.23 00:27:34.408 PCIE (0000:00:13.0) NSID 1 from core 0: 10865.14 127.33 11733.70 9751.25 46505.68 00:27:34.408 PCIE (0000:00:12.0) NSID 1 from core 0: 10865.14 127.33 11698.50 9841.12 43347.70 00:27:34.408 PCIE (0000:00:12.0) NSID 2 from core 0: 10865.14 127.33 11663.03 9716.74 40280.37 00:27:34.408 PCIE (0000:00:12.0) NSID 3 from core 0: 10865.14 127.33 11626.27 9739.24 37049.21 00:27:34.408 ======================================================== 00:27:34.409 Total : 65190.82 763.95 11715.18 9588.29 51944.01 00:27:34.409 00:27:34.409 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 9949.556us 00:27:34.409 10.00000% : 10545.338us 00:27:34.409 25.00000% : 10902.807us 00:27:34.409 50.00000% : 11379.433us 00:27:34.409 75.00000% : 11915.636us 00:27:34.409 90.00000% : 12451.840us 00:27:34.409 95.00000% : 12868.887us 00:27:34.409 98.00000% : 14537.076us 00:27:34.409 99.00000% : 38844.975us 00:27:34.409 99.50000% : 49330.735us 00:27:34.409 99.90000% : 51475.549us 00:27:34.409 99.99000% : 51952.175us 00:27:34.409 99.99900% : 51952.175us 00:27:34.409 99.99990% : 51952.175us 00:27:34.409 99.99999% : 51952.175us 00:27:34.409 00:27:34.409 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 10187.869us 00:27:34.409 10.00000% : 10664.495us 00:27:34.409 25.00000% : 11021.964us 00:27:34.409 50.00000% : 11379.433us 00:27:34.409 75.00000% : 11796.480us 00:27:34.409 90.00000% : 12273.105us 00:27:34.409 95.00000% : 12570.996us 00:27:34.409 98.00000% : 14417.920us 00:27:34.409 99.00000% : 37653.411us 00:27:34.409 99.50000% : 46470.982us 00:27:34.409 99.90000% : 48377.484us 00:27:34.409 99.99000% : 48854.109us 00:27:34.409 99.99900% : 48854.109us 00:27:34.409 99.99990% : 48854.109us 00:27:34.409 99.99999% : 48854.109us 00:27:34.409 00:27:34.409 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 10187.869us 00:27:34.409 10.00000% : 10664.495us 00:27:34.409 25.00000% : 11021.964us 00:27:34.409 50.00000% : 11319.855us 00:27:34.409 75.00000% : 11796.480us 00:27:34.409 90.00000% : 12332.684us 00:27:34.409 95.00000% : 12630.575us 00:27:34.409 98.00000% : 14179.607us 00:27:34.409 99.00000% : 35508.596us 00:27:34.409 99.50000% : 44326.167us 00:27:34.409 99.90000% : 46232.669us 00:27:34.409 99.99000% : 46470.982us 00:27:34.409 99.99900% : 46709.295us 00:27:34.409 99.99990% : 46709.295us 00:27:34.409 99.99999% : 46709.295us 00:27:34.409 
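00:27:34.409 Note on reading these summaries: the percentile lines (for example "99.00000% : 38844.975us" for 0000:00:10.0) are derived from the cumulative latency histograms that spdk_nvme_perf prints after the summary blocks. Each histogram row gives a bucket upper bound in microseconds and the number of I/Os that landed in that bucket; a percentile is simply the upper bound of the first bucket at which the running I/O count reaches that fraction of the total. A minimal C sketch of that lookup, using purely hypothetical bucket values rather than data from this run:

    #include <stdio.h>
    #include <stddef.h>

    struct bucket {
        double upper_us;        /* bucket upper bound in microseconds */
        unsigned long count;    /* I/Os that completed within this bucket */
    };

    /* Upper bound of the first bucket whose running I/O count reaches pct%. */
    static double percentile_us(const struct bucket *b, size_t n,
                                unsigned long total_ios, double pct)
    {
        unsigned long running = 0;
        for (size_t i = 0; i < n; i++) {
            running += b[i].count;
            if (100.0 * (double)running / (double)total_ios >= pct)
                return b[i].upper_us;
        }
        return b[n - 1].upper_us;
    }

    int main(void)
    {
        /* Hypothetical histogram, not taken from the run above. */
        struct bucket hist[] = {
            { 10000.0, 120 }, { 11000.0, 4200 }, { 12000.0, 5100 },
            { 13000.0, 900 }, { 52000.0, 10 },
        };
        size_t n = sizeof(hist) / sizeof(hist[0]);
        unsigned long total = 0;
        for (size_t i = 0; i < n; i++)
            total += hist[i].count;
        printf("p50 ~ %.1f us, p99 ~ %.1f us\n",
               percentile_us(hist, n, total, 50.0),
               percentile_us(hist, n, total, 99.0));
        return 0;
    }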
00:27:34.409 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 10187.869us 00:27:34.409 10.00000% : 10664.495us 00:27:34.409 25.00000% : 10962.385us 00:27:34.409 50.00000% : 11319.855us 00:27:34.409 75.00000% : 11796.480us 00:27:34.409 90.00000% : 12273.105us 00:27:34.409 95.00000% : 12570.996us 00:27:34.409 98.00000% : 14179.607us 00:27:34.409 99.00000% : 32410.531us 00:27:34.409 99.50000% : 40989.789us 00:27:34.409 99.90000% : 42896.291us 00:27:34.409 99.99000% : 43372.916us 00:27:34.409 99.99900% : 43372.916us 00:27:34.409 99.99990% : 43372.916us 00:27:34.409 99.99999% : 43372.916us 00:27:34.409 00:27:34.409 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 10187.869us 00:27:34.409 10.00000% : 10664.495us 00:27:34.409 25.00000% : 10962.385us 00:27:34.409 50.00000% : 11379.433us 00:27:34.409 75.00000% : 11796.480us 00:27:34.409 90.00000% : 12273.105us 00:27:34.409 95.00000% : 12630.575us 00:27:34.409 98.00000% : 14417.920us 00:27:34.409 99.00000% : 29431.622us 00:27:34.409 99.50000% : 37891.724us 00:27:34.409 99.90000% : 40036.538us 00:27:34.409 99.99000% : 40274.851us 00:27:34.409 99.99900% : 40513.164us 00:27:34.409 99.99990% : 40513.164us 00:27:34.409 99.99999% : 40513.164us 00:27:34.409 00:27:34.409 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:27:34.409 ================================================================================= 00:27:34.409 1.00000% : 10187.869us 00:27:34.409 10.00000% : 10664.495us 00:27:34.409 25.00000% : 11021.964us 00:27:34.409 50.00000% : 11379.433us 00:27:34.409 75.00000% : 11856.058us 00:27:34.409 90.00000% : 12273.105us 00:27:34.409 95.00000% : 12630.575us 00:27:34.409 98.00000% : 13583.825us 00:27:34.409 99.00000% : 26452.713us 00:27:34.409 99.50000% : 34555.345us 00:27:34.409 99.90000% : 36700.160us 00:27:34.409 99.99000% : 37176.785us 00:27:34.409 99.99900% : 37176.785us 00:27:34.409 99.99990% : 37176.785us 00:27:34.409 99.99999% : 37176.785us 00:27:34.409 00:27:34.409 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:27:34.409 ============================================================================== 00:27:34.409 Range in us Cumulative IO count 00:27:34.409 9532.509 - 9592.087: 0.0276% ( 3) 00:27:34.409 9592.087 - 9651.665: 0.0919% ( 7) 00:27:34.409 9651.665 - 9711.244: 0.1195% ( 3) 00:27:34.409 9711.244 - 9770.822: 0.1930% ( 8) 00:27:34.409 9770.822 - 9830.400: 0.3676% ( 19) 00:27:34.409 9830.400 - 9889.978: 0.6342% ( 29) 00:27:34.409 9889.978 - 9949.556: 1.0018% ( 40) 00:27:34.409 9949.556 - 10009.135: 1.6912% ( 75) 00:27:34.409 10009.135 - 10068.713: 2.3713% ( 74) 00:27:34.409 10068.713 - 10128.291: 3.0239% ( 71) 00:27:34.409 10128.291 - 10187.869: 3.8787% ( 93) 00:27:34.409 10187.869 - 10247.447: 4.8805% ( 109) 00:27:34.409 10247.447 - 10307.025: 5.6710% ( 86) 00:27:34.409 10307.025 - 10366.604: 6.6544% ( 107) 00:27:34.409 10366.604 - 10426.182: 8.0699% ( 154) 00:27:34.409 10426.182 - 10485.760: 9.2004% ( 123) 00:27:34.409 10485.760 - 10545.338: 10.7169% ( 165) 00:27:34.409 10545.338 - 10604.916: 12.3713% ( 180) 00:27:34.409 10604.916 - 10664.495: 14.3566% ( 216) 00:27:34.409 10664.495 - 10724.073: 16.0754% ( 187) 00:27:34.409 10724.073 - 10783.651: 18.8051% ( 297) 00:27:34.409 10783.651 - 10843.229: 22.7941% ( 434) 00:27:34.409 10843.229 - 
10902.807: 26.6176% ( 416) 00:27:34.409 10902.807 - 10962.385: 29.7702% ( 343) 00:27:34.409 10962.385 - 11021.964: 33.4191% ( 397) 00:27:34.409 11021.964 - 11081.542: 36.4614% ( 331) 00:27:34.409 11081.542 - 11141.120: 40.4412% ( 433) 00:27:34.409 11141.120 - 11200.698: 43.9338% ( 380) 00:27:34.409 11200.698 - 11260.276: 46.9026% ( 323) 00:27:34.409 11260.276 - 11319.855: 49.8713% ( 323) 00:27:34.409 11319.855 - 11379.433: 53.2445% ( 367) 00:27:34.409 11379.433 - 11439.011: 56.3327% ( 336) 00:27:34.409 11439.011 - 11498.589: 59.1085% ( 302) 00:27:34.409 11498.589 - 11558.167: 61.6544% ( 277) 00:27:34.409 11558.167 - 11617.745: 64.5037% ( 310) 00:27:34.409 11617.745 - 11677.324: 67.2610% ( 300) 00:27:34.409 11677.324 - 11736.902: 69.8070% ( 277) 00:27:34.409 11736.902 - 11796.480: 72.3621% ( 278) 00:27:34.409 11796.480 - 11856.058: 74.8713% ( 273) 00:27:34.409 11856.058 - 11915.636: 77.0221% ( 234) 00:27:34.409 11915.636 - 11975.215: 79.4761% ( 267) 00:27:34.409 11975.215 - 12034.793: 81.4614% ( 216) 00:27:34.409 12034.793 - 12094.371: 83.0239% ( 170) 00:27:34.409 12094.371 - 12153.949: 84.6875% ( 181) 00:27:34.409 12153.949 - 12213.527: 85.9559% ( 138) 00:27:34.409 12213.527 - 12273.105: 87.3162% ( 148) 00:27:34.409 12273.105 - 12332.684: 88.4467% ( 123) 00:27:34.409 12332.684 - 12392.262: 89.6415% ( 130) 00:27:34.409 12392.262 - 12451.840: 90.5790% ( 102) 00:27:34.409 12451.840 - 12511.418: 91.5165% ( 102) 00:27:34.409 12511.418 - 12570.996: 92.2243% ( 77) 00:27:34.409 12570.996 - 12630.575: 92.9228% ( 76) 00:27:34.409 12630.575 - 12690.153: 93.6213% ( 76) 00:27:34.409 12690.153 - 12749.731: 94.1544% ( 58) 00:27:34.409 12749.731 - 12809.309: 94.6140% ( 50) 00:27:34.409 12809.309 - 12868.887: 95.0551% ( 48) 00:27:34.409 12868.887 - 12928.465: 95.4320% ( 41) 00:27:34.409 12928.465 - 12988.044: 95.7629% ( 36) 00:27:34.409 12988.044 - 13047.622: 96.0570% ( 32) 00:27:34.409 13047.622 - 13107.200: 96.3143% ( 28) 00:27:34.409 13107.200 - 13166.778: 96.5441% ( 25) 00:27:34.409 13166.778 - 13226.356: 96.8290% ( 31) 00:27:34.409 13226.356 - 13285.935: 96.9853% ( 17) 00:27:34.409 13285.935 - 13345.513: 97.1140% ( 14) 00:27:34.409 13345.513 - 13405.091: 97.2059% ( 10) 00:27:34.409 13405.091 - 13464.669: 97.2794% ( 8) 00:27:34.409 13464.669 - 13524.247: 97.2978% ( 2) 00:27:34.409 13524.247 - 13583.825: 97.3438% ( 5) 00:27:34.409 13583.825 - 13643.404: 97.3805% ( 4) 00:27:34.409 13643.404 - 13702.982: 97.3989% ( 2) 00:27:34.409 13702.982 - 13762.560: 97.4173% ( 2) 00:27:34.409 13762.560 - 13822.138: 97.4449% ( 3) 00:27:34.409 13822.138 - 13881.716: 97.4632% ( 2) 00:27:34.409 13881.716 - 13941.295: 97.4816% ( 2) 00:27:34.409 13941.295 - 14000.873: 97.5184% ( 4) 00:27:34.409 14000.873 - 14060.451: 97.5551% ( 4) 00:27:34.409 14060.451 - 14120.029: 97.6103% ( 6) 00:27:34.409 14120.029 - 14179.607: 97.6379% ( 3) 00:27:34.409 14179.607 - 14239.185: 97.6930% ( 6) 00:27:34.409 14239.185 - 14298.764: 97.7482% ( 6) 00:27:34.409 14298.764 - 14358.342: 97.8125% ( 7) 00:27:34.409 14358.342 - 14417.920: 97.8768% ( 7) 00:27:34.409 14417.920 - 14477.498: 97.9504% ( 8) 00:27:34.409 14477.498 - 14537.076: 98.0239% ( 8) 00:27:34.409 14537.076 - 14596.655: 98.0699% ( 5) 00:27:34.409 14596.655 - 14656.233: 98.0790% ( 1) 00:27:34.409 14954.124 - 15013.702: 98.1066% ( 3) 00:27:34.409 15013.702 - 15073.280: 98.1618% ( 6) 00:27:34.409 15073.280 - 15132.858: 98.1801% ( 2) 00:27:34.409 15132.858 - 15192.436: 98.2077% ( 3) 00:27:34.409 15192.436 - 15252.015: 98.2537% ( 5) 00:27:34.409 15252.015 - 15371.171: 98.3640% ( 12) 
00:27:34.409 15371.171 - 15490.327: 98.4559% ( 10) 00:27:34.409 15490.327 - 15609.484: 98.5386% ( 9) 00:27:34.409 15609.484 - 15728.640: 98.6397% ( 11) 00:27:34.409 15728.640 - 15847.796: 98.6857% ( 5) 00:27:34.409 16205.265 - 16324.422: 98.7224% ( 4) 00:27:34.409 16324.422 - 16443.578: 98.7408% ( 2) 00:27:34.409 16443.578 - 16562.735: 98.7776% ( 4) 00:27:34.409 16562.735 - 16681.891: 98.8143% ( 4) 00:27:34.409 16681.891 - 16801.047: 98.8235% ( 1) 00:27:34.409 37653.411 - 37891.724: 98.8511% ( 3) 00:27:34.409 37891.724 - 38130.036: 98.8879% ( 4) 00:27:34.409 38130.036 - 38368.349: 98.9246% ( 4) 00:27:34.409 38368.349 - 38606.662: 98.9614% ( 4) 00:27:34.409 38606.662 - 38844.975: 99.0165% ( 6) 00:27:34.410 38844.975 - 39083.287: 99.0533% ( 4) 00:27:34.410 39083.287 - 39321.600: 99.0993% ( 5) 00:27:34.410 39321.600 - 39559.913: 99.1360% ( 4) 00:27:34.410 39559.913 - 39798.225: 99.1820% ( 5) 00:27:34.410 39798.225 - 40036.538: 99.2279% ( 5) 00:27:34.410 40036.538 - 40274.851: 99.2647% ( 4) 00:27:34.410 40274.851 - 40513.164: 99.3107% ( 5) 00:27:34.410 40513.164 - 40751.476: 99.3750% ( 7) 00:27:34.410 40751.476 - 40989.789: 99.4026% ( 3) 00:27:34.410 40989.789 - 41228.102: 99.4118% ( 1) 00:27:34.410 48854.109 - 49092.422: 99.4577% ( 5) 00:27:34.410 49092.422 - 49330.735: 99.5037% ( 5) 00:27:34.410 49330.735 - 49569.047: 99.5404% ( 4) 00:27:34.410 49569.047 - 49807.360: 99.5956% ( 6) 00:27:34.410 49807.360 - 50045.673: 99.6415% ( 5) 00:27:34.410 50045.673 - 50283.985: 99.6691% ( 3) 00:27:34.410 50283.985 - 50522.298: 99.7151% ( 5) 00:27:34.410 50522.298 - 50760.611: 99.7610% ( 5) 00:27:34.410 50760.611 - 50998.924: 99.8162% ( 6) 00:27:34.410 50998.924 - 51237.236: 99.8529% ( 4) 00:27:34.410 51237.236 - 51475.549: 99.9081% ( 6) 00:27:34.410 51475.549 - 51713.862: 99.9540% ( 5) 00:27:34.410 51713.862 - 51952.175: 100.0000% ( 5) 00:27:34.410 00:27:34.410 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:27:34.410 ============================================================================== 00:27:34.410 Range in us Cumulative IO count 00:27:34.410 9711.244 - 9770.822: 0.0092% ( 1) 00:27:34.410 9770.822 - 9830.400: 0.0184% ( 1) 00:27:34.410 9830.400 - 9889.978: 0.0919% ( 8) 00:27:34.410 9889.978 - 9949.556: 0.2390% ( 16) 00:27:34.410 9949.556 - 10009.135: 0.3768% ( 15) 00:27:34.410 10009.135 - 10068.713: 0.5607% ( 20) 00:27:34.410 10068.713 - 10128.291: 0.8824% ( 35) 00:27:34.410 10128.291 - 10187.869: 1.5809% ( 76) 00:27:34.410 10187.869 - 10247.447: 2.1599% ( 63) 00:27:34.410 10247.447 - 10307.025: 3.0239% ( 94) 00:27:34.410 10307.025 - 10366.604: 4.1268% ( 120) 00:27:34.410 10366.604 - 10426.182: 5.3125% ( 129) 00:27:34.410 10426.182 - 10485.760: 6.7647% ( 158) 00:27:34.410 10485.760 - 10545.338: 8.2996% ( 167) 00:27:34.410 10545.338 - 10604.916: 9.8897% ( 173) 00:27:34.410 10604.916 - 10664.495: 11.4798% ( 173) 00:27:34.410 10664.495 - 10724.073: 13.0882% ( 175) 00:27:34.410 10724.073 - 10783.651: 15.2757% ( 238) 00:27:34.410 10783.651 - 10843.229: 17.8033% ( 275) 00:27:34.410 10843.229 - 10902.807: 20.8088% ( 327) 00:27:34.410 10902.807 - 10962.385: 24.3934% ( 390) 00:27:34.410 10962.385 - 11021.964: 28.6581% ( 464) 00:27:34.410 11021.964 - 11081.542: 32.9504% ( 467) 00:27:34.410 11081.542 - 11141.120: 37.2610% ( 469) 00:27:34.410 11141.120 - 11200.698: 41.4154% ( 452) 00:27:34.410 11200.698 - 11260.276: 45.6893% ( 465) 00:27:34.410 11260.276 - 11319.855: 49.8346% ( 451) 00:27:34.410 11319.855 - 11379.433: 53.7132% ( 422) 00:27:34.410 11379.433 - 11439.011: 57.2794% ( 388) 
00:27:34.410 11439.011 - 11498.589: 60.5790% ( 359) 00:27:34.410 11498.589 - 11558.167: 64.0993% ( 383) 00:27:34.410 11558.167 - 11617.745: 67.0772% ( 324) 00:27:34.410 11617.745 - 11677.324: 69.8070% ( 297) 00:27:34.410 11677.324 - 11736.902: 72.6838% ( 313) 00:27:34.410 11736.902 - 11796.480: 75.5607% ( 313) 00:27:34.410 11796.480 - 11856.058: 78.2812% ( 296) 00:27:34.410 11856.058 - 11915.636: 80.6710% ( 260) 00:27:34.410 11915.636 - 11975.215: 82.9044% ( 243) 00:27:34.410 11975.215 - 12034.793: 84.7610% ( 202) 00:27:34.410 12034.793 - 12094.371: 86.5165% ( 191) 00:27:34.410 12094.371 - 12153.949: 88.0607% ( 168) 00:27:34.410 12153.949 - 12213.527: 89.4669% ( 153) 00:27:34.410 12213.527 - 12273.105: 90.6618% ( 130) 00:27:34.410 12273.105 - 12332.684: 91.7188% ( 115) 00:27:34.410 12332.684 - 12392.262: 92.8860% ( 127) 00:27:34.410 12392.262 - 12451.840: 93.8787% ( 108) 00:27:34.410 12451.840 - 12511.418: 94.6324% ( 82) 00:27:34.410 12511.418 - 12570.996: 95.1287% ( 54) 00:27:34.410 12570.996 - 12630.575: 95.5147% ( 42) 00:27:34.410 12630.575 - 12690.153: 95.8364% ( 35) 00:27:34.410 12690.153 - 12749.731: 96.0938% ( 28) 00:27:34.410 12749.731 - 12809.309: 96.3235% ( 25) 00:27:34.410 12809.309 - 12868.887: 96.4890% ( 18) 00:27:34.410 12868.887 - 12928.465: 96.6360% ( 16) 00:27:34.410 12928.465 - 12988.044: 96.7463% ( 12) 00:27:34.410 12988.044 - 13047.622: 96.8290% ( 9) 00:27:34.410 13047.622 - 13107.200: 96.8934% ( 7) 00:27:34.410 13107.200 - 13166.778: 96.9393% ( 5) 00:27:34.410 13166.778 - 13226.356: 96.9945% ( 6) 00:27:34.410 13226.356 - 13285.935: 97.0496% ( 6) 00:27:34.410 13285.935 - 13345.513: 97.0956% ( 5) 00:27:34.410 13345.513 - 13405.091: 97.1507% ( 6) 00:27:34.410 13405.091 - 13464.669: 97.2059% ( 6) 00:27:34.410 13464.669 - 13524.247: 97.2335% ( 3) 00:27:34.410 13524.247 - 13583.825: 97.2610% ( 3) 00:27:34.410 13583.825 - 13643.404: 97.2794% ( 2) 00:27:34.410 13643.404 - 13702.982: 97.2978% ( 2) 00:27:34.410 13702.982 - 13762.560: 97.3254% ( 3) 00:27:34.410 13762.560 - 13822.138: 97.3713% ( 5) 00:27:34.410 13822.138 - 13881.716: 97.4265% ( 6) 00:27:34.410 13881.716 - 13941.295: 97.4816% ( 6) 00:27:34.410 13941.295 - 14000.873: 97.6746% ( 21) 00:27:34.410 14000.873 - 14060.451: 97.7941% ( 13) 00:27:34.410 14060.451 - 14120.029: 97.8309% ( 4) 00:27:34.410 14120.029 - 14179.607: 97.8493% ( 2) 00:27:34.410 14179.607 - 14239.185: 97.8768% ( 3) 00:27:34.410 14239.185 - 14298.764: 97.9412% ( 7) 00:27:34.410 14298.764 - 14358.342: 97.9871% ( 5) 00:27:34.410 14358.342 - 14417.920: 98.0607% ( 8) 00:27:34.410 14417.920 - 14477.498: 98.1434% ( 9) 00:27:34.410 14477.498 - 14537.076: 98.2261% ( 9) 00:27:34.410 14537.076 - 14596.655: 98.3548% ( 14) 00:27:34.410 14596.655 - 14656.233: 98.3915% ( 4) 00:27:34.410 14656.233 - 14715.811: 98.4283% ( 4) 00:27:34.410 14715.811 - 14775.389: 98.4559% ( 3) 00:27:34.410 14775.389 - 14834.967: 98.4743% ( 2) 00:27:34.410 14834.967 - 14894.545: 98.4926% ( 2) 00:27:34.410 14894.545 - 14954.124: 98.5110% ( 2) 00:27:34.410 14954.124 - 15013.702: 98.5202% ( 1) 00:27:34.410 15252.015 - 15371.171: 98.5570% ( 4) 00:27:34.410 15371.171 - 15490.327: 98.6857% ( 14) 00:27:34.410 15490.327 - 15609.484: 98.7408% ( 6) 00:27:34.410 15609.484 - 15728.640: 98.7592% ( 2) 00:27:34.410 15728.640 - 15847.796: 98.7960% ( 4) 00:27:34.410 15847.796 - 15966.953: 98.8235% ( 3) 00:27:34.410 36461.847 - 36700.160: 98.8327% ( 1) 00:27:34.410 36700.160 - 36938.473: 98.8787% ( 5) 00:27:34.410 36938.473 - 37176.785: 98.9246% ( 5) 00:27:34.410 37176.785 - 37415.098: 98.9798% ( 6) 
00:27:34.410 37415.098 - 37653.411: 99.0349% ( 6) 00:27:34.410 37653.411 - 37891.724: 99.0809% ( 5) 00:27:34.410 37891.724 - 38130.036: 99.1176% ( 4) 00:27:34.410 38130.036 - 38368.349: 99.1728% ( 6) 00:27:34.410 38368.349 - 38606.662: 99.2188% ( 5) 00:27:34.410 38606.662 - 38844.975: 99.2647% ( 5) 00:27:34.410 38844.975 - 39083.287: 99.3107% ( 5) 00:27:34.410 39083.287 - 39321.600: 99.3566% ( 5) 00:27:34.410 39321.600 - 39559.913: 99.4026% ( 5) 00:27:34.410 39559.913 - 39798.225: 99.4118% ( 1) 00:27:34.410 45994.356 - 46232.669: 99.4577% ( 5) 00:27:34.410 46232.669 - 46470.982: 99.5129% ( 6) 00:27:34.410 46470.982 - 46709.295: 99.5588% ( 5) 00:27:34.410 46709.295 - 46947.607: 99.6140% ( 6) 00:27:34.410 46947.607 - 47185.920: 99.6599% ( 5) 00:27:34.410 47185.920 - 47424.233: 99.7059% ( 5) 00:27:34.410 47424.233 - 47662.545: 99.7610% ( 6) 00:27:34.410 47662.545 - 47900.858: 99.8070% ( 5) 00:27:34.410 47900.858 - 48139.171: 99.8621% ( 6) 00:27:34.410 48139.171 - 48377.484: 99.9081% ( 5) 00:27:34.410 48377.484 - 48615.796: 99.9632% ( 6) 00:27:34.410 48615.796 - 48854.109: 100.0000% ( 4) 00:27:34.410 00:27:34.410 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:27:34.410 ============================================================================== 00:27:34.410 Range in us Cumulative IO count 00:27:34.410 9711.244 - 9770.822: 0.0092% ( 1) 00:27:34.410 9830.400 - 9889.978: 0.0184% ( 1) 00:27:34.410 9889.978 - 9949.556: 0.1011% ( 9) 00:27:34.410 9949.556 - 10009.135: 0.2757% ( 19) 00:27:34.410 10009.135 - 10068.713: 0.6066% ( 36) 00:27:34.410 10068.713 - 10128.291: 0.9743% ( 40) 00:27:34.410 10128.291 - 10187.869: 1.5441% ( 62) 00:27:34.410 10187.869 - 10247.447: 2.3529% ( 88) 00:27:34.410 10247.447 - 10307.025: 3.4007% ( 114) 00:27:34.410 10307.025 - 10366.604: 4.2371% ( 91) 00:27:34.410 10366.604 - 10426.182: 5.2114% ( 106) 00:27:34.410 10426.182 - 10485.760: 6.2500% ( 113) 00:27:34.410 10485.760 - 10545.338: 7.9228% ( 182) 00:27:34.410 10545.338 - 10604.916: 9.2463% ( 144) 00:27:34.410 10604.916 - 10664.495: 10.9099% ( 181) 00:27:34.410 10664.495 - 10724.073: 13.1342% ( 242) 00:27:34.410 10724.073 - 10783.651: 15.3768% ( 244) 00:27:34.410 10783.651 - 10843.229: 18.2353% ( 311) 00:27:34.410 10843.229 - 10902.807: 21.4062% ( 345) 00:27:34.410 10902.807 - 10962.385: 24.8346% ( 373) 00:27:34.410 10962.385 - 11021.964: 28.7224% ( 423) 00:27:34.410 11021.964 - 11081.542: 32.7390% ( 437) 00:27:34.410 11081.542 - 11141.120: 37.2610% ( 492) 00:27:34.410 11141.120 - 11200.698: 42.1691% ( 534) 00:27:34.410 11200.698 - 11260.276: 46.4430% ( 465) 00:27:34.410 11260.276 - 11319.855: 50.7629% ( 470) 00:27:34.410 11319.855 - 11379.433: 54.3382% ( 389) 00:27:34.410 11379.433 - 11439.011: 58.1342% ( 413) 00:27:34.410 11439.011 - 11498.589: 61.5257% ( 369) 00:27:34.410 11498.589 - 11558.167: 64.6967% ( 345) 00:27:34.410 11558.167 - 11617.745: 68.0423% ( 364) 00:27:34.410 11617.745 - 11677.324: 71.3695% ( 362) 00:27:34.410 11677.324 - 11736.902: 74.3842% ( 328) 00:27:34.410 11736.902 - 11796.480: 76.7279% ( 255) 00:27:34.410 11796.480 - 11856.058: 79.0165% ( 249) 00:27:34.410 11856.058 - 11915.636: 81.1489% ( 232) 00:27:34.410 11915.636 - 11975.215: 82.6654% ( 165) 00:27:34.410 11975.215 - 12034.793: 84.3290% ( 181) 00:27:34.410 12034.793 - 12094.371: 85.9007% ( 171) 00:27:34.410 12094.371 - 12153.949: 87.2702% ( 149) 00:27:34.410 12153.949 - 12213.527: 88.7500% ( 161) 00:27:34.410 12213.527 - 12273.105: 89.9724% ( 133) 00:27:34.410 12273.105 - 12332.684: 91.1857% ( 132) 00:27:34.410 12332.684 - 
12392.262: 92.3254% ( 124) 00:27:34.410 12392.262 - 12451.840: 93.1801% ( 93) 00:27:34.410 12451.840 - 12511.418: 93.8511% ( 73) 00:27:34.411 12511.418 - 12570.996: 94.6415% ( 86) 00:27:34.411 12570.996 - 12630.575: 95.1103% ( 51) 00:27:34.411 12630.575 - 12690.153: 95.5331% ( 46) 00:27:34.411 12690.153 - 12749.731: 95.9467% ( 45) 00:27:34.411 12749.731 - 12809.309: 96.2316% ( 31) 00:27:34.411 12809.309 - 12868.887: 96.4430% ( 23) 00:27:34.411 12868.887 - 12928.465: 96.6268% ( 20) 00:27:34.411 12928.465 - 12988.044: 96.8107% ( 20) 00:27:34.411 12988.044 - 13047.622: 96.9945% ( 20) 00:27:34.411 13047.622 - 13107.200: 97.1048% ( 12) 00:27:34.411 13107.200 - 13166.778: 97.1691% ( 7) 00:27:34.411 13166.778 - 13226.356: 97.2059% ( 4) 00:27:34.411 13226.356 - 13285.935: 97.2426% ( 4) 00:27:34.411 13285.935 - 13345.513: 97.2702% ( 3) 00:27:34.411 13345.513 - 13405.091: 97.3162% ( 5) 00:27:34.411 13405.091 - 13464.669: 97.3346% ( 2) 00:27:34.411 13464.669 - 13524.247: 97.3529% ( 2) 00:27:34.411 13524.247 - 13583.825: 97.3897% ( 4) 00:27:34.411 13583.825 - 13643.404: 97.4173% ( 3) 00:27:34.411 13643.404 - 13702.982: 97.4357% ( 2) 00:27:34.411 13702.982 - 13762.560: 97.4816% ( 5) 00:27:34.411 13762.560 - 13822.138: 97.5276% ( 5) 00:27:34.411 13822.138 - 13881.716: 97.5919% ( 7) 00:27:34.411 13881.716 - 13941.295: 97.6562% ( 7) 00:27:34.411 13941.295 - 14000.873: 97.7206% ( 7) 00:27:34.411 14000.873 - 14060.451: 97.8125% ( 10) 00:27:34.411 14060.451 - 14120.029: 97.9136% ( 11) 00:27:34.411 14120.029 - 14179.607: 98.0607% ( 16) 00:27:34.411 14179.607 - 14239.185: 98.1250% ( 7) 00:27:34.411 14239.185 - 14298.764: 98.2169% ( 10) 00:27:34.411 14298.764 - 14358.342: 98.3180% ( 11) 00:27:34.411 14358.342 - 14417.920: 98.3824% ( 7) 00:27:34.411 14417.920 - 14477.498: 98.4743% ( 10) 00:27:34.411 14477.498 - 14537.076: 98.5110% ( 4) 00:27:34.411 14537.076 - 14596.655: 98.5386% ( 3) 00:27:34.411 14596.655 - 14656.233: 98.5846% ( 5) 00:27:34.411 14656.233 - 14715.811: 98.6305% ( 5) 00:27:34.411 14715.811 - 14775.389: 98.6765% ( 5) 00:27:34.411 14775.389 - 14834.967: 98.7224% ( 5) 00:27:34.411 14834.967 - 14894.545: 98.7500% ( 3) 00:27:34.411 14894.545 - 14954.124: 98.7776% ( 3) 00:27:34.411 14954.124 - 15013.702: 98.7960% ( 2) 00:27:34.411 15013.702 - 15073.280: 98.8143% ( 2) 00:27:34.411 15073.280 - 15132.858: 98.8235% ( 1) 00:27:34.411 34317.033 - 34555.345: 98.8327% ( 1) 00:27:34.411 34555.345 - 34793.658: 98.8879% ( 6) 00:27:34.411 34793.658 - 35031.971: 98.9338% ( 5) 00:27:34.411 35031.971 - 35270.284: 98.9798% ( 5) 00:27:34.411 35270.284 - 35508.596: 99.0349% ( 6) 00:27:34.411 35508.596 - 35746.909: 99.0717% ( 4) 00:27:34.411 35746.909 - 35985.222: 99.1176% ( 5) 00:27:34.411 35985.222 - 36223.535: 99.1636% ( 5) 00:27:34.411 36223.535 - 36461.847: 99.2096% ( 5) 00:27:34.411 36461.847 - 36700.160: 99.2555% ( 5) 00:27:34.411 36700.160 - 36938.473: 99.3015% ( 5) 00:27:34.411 36938.473 - 37176.785: 99.3382% ( 4) 00:27:34.411 37176.785 - 37415.098: 99.3934% ( 6) 00:27:34.411 37415.098 - 37653.411: 99.4118% ( 2) 00:27:34.411 43611.229 - 43849.542: 99.4485% ( 4) 00:27:34.411 43849.542 - 44087.855: 99.4945% ( 5) 00:27:34.411 44087.855 - 44326.167: 99.5496% ( 6) 00:27:34.411 44326.167 - 44564.480: 99.5956% ( 5) 00:27:34.411 44564.480 - 44802.793: 99.6415% ( 5) 00:27:34.411 44802.793 - 45041.105: 99.6875% ( 5) 00:27:34.411 45041.105 - 45279.418: 99.7335% ( 5) 00:27:34.411 45279.418 - 45517.731: 99.7886% ( 6) 00:27:34.411 45517.731 - 45756.044: 99.8346% ( 5) 00:27:34.411 45756.044 - 45994.356: 99.8897% ( 6) 
00:27:34.411 45994.356 - 46232.669: 99.9357% ( 5) 00:27:34.411 46232.669 - 46470.982: 99.9908% ( 6) 00:27:34.411 46470.982 - 46709.295: 100.0000% ( 1) 00:27:34.411 00:27:34.411 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:27:34.411 ============================================================================== 00:27:34.411 Range in us Cumulative IO count 00:27:34.411 9830.400 - 9889.978: 0.0184% ( 2) 00:27:34.411 9889.978 - 9949.556: 0.0643% ( 5) 00:27:34.411 9949.556 - 10009.135: 0.2298% ( 18) 00:27:34.411 10009.135 - 10068.713: 0.4871% ( 28) 00:27:34.411 10068.713 - 10128.291: 0.8824% ( 43) 00:27:34.411 10128.291 - 10187.869: 1.3971% ( 56) 00:27:34.411 10187.869 - 10247.447: 2.2151% ( 89) 00:27:34.411 10247.447 - 10307.025: 3.2077% ( 108) 00:27:34.411 10307.025 - 10366.604: 4.2188% ( 110) 00:27:34.411 10366.604 - 10426.182: 5.4412% ( 133) 00:27:34.411 10426.182 - 10485.760: 6.6176% ( 128) 00:27:34.411 10485.760 - 10545.338: 7.7298% ( 121) 00:27:34.411 10545.338 - 10604.916: 9.1085% ( 150) 00:27:34.411 10604.916 - 10664.495: 10.8732% ( 192) 00:27:34.411 10664.495 - 10724.073: 13.0239% ( 234) 00:27:34.411 10724.073 - 10783.651: 15.2849% ( 246) 00:27:34.411 10783.651 - 10843.229: 17.8217% ( 276) 00:27:34.411 10843.229 - 10902.807: 21.4430% ( 394) 00:27:34.411 10902.807 - 10962.385: 25.3309% ( 423) 00:27:34.411 10962.385 - 11021.964: 29.3750% ( 440) 00:27:34.411 11021.964 - 11081.542: 33.6949% ( 470) 00:27:34.411 11081.542 - 11141.120: 37.5460% ( 419) 00:27:34.411 11141.120 - 11200.698: 41.9301% ( 477) 00:27:34.411 11200.698 - 11260.276: 46.1121% ( 455) 00:27:34.411 11260.276 - 11319.855: 50.1654% ( 441) 00:27:34.411 11319.855 - 11379.433: 54.0349% ( 421) 00:27:34.411 11379.433 - 11439.011: 57.3989% ( 366) 00:27:34.411 11439.011 - 11498.589: 60.5515% ( 343) 00:27:34.411 11498.589 - 11558.167: 63.9154% ( 366) 00:27:34.411 11558.167 - 11617.745: 66.8934% ( 324) 00:27:34.411 11617.745 - 11677.324: 69.6691% ( 302) 00:27:34.411 11677.324 - 11736.902: 72.6562% ( 325) 00:27:34.411 11736.902 - 11796.480: 75.2390% ( 281) 00:27:34.411 11796.480 - 11856.058: 77.6287% ( 260) 00:27:34.411 11856.058 - 11915.636: 79.9816% ( 256) 00:27:34.411 11915.636 - 11975.215: 82.0312% ( 223) 00:27:34.411 11975.215 - 12034.793: 83.9982% ( 214) 00:27:34.411 12034.793 - 12094.371: 85.7721% ( 193) 00:27:34.411 12094.371 - 12153.949: 87.6287% ( 202) 00:27:34.411 12153.949 - 12213.527: 89.1912% ( 170) 00:27:34.411 12213.527 - 12273.105: 90.5055% ( 143) 00:27:34.411 12273.105 - 12332.684: 91.6728% ( 127) 00:27:34.411 12332.684 - 12392.262: 92.8768% ( 131) 00:27:34.411 12392.262 - 12451.840: 93.8695% ( 108) 00:27:34.411 12451.840 - 12511.418: 94.7059% ( 91) 00:27:34.411 12511.418 - 12570.996: 95.3033% ( 65) 00:27:34.411 12570.996 - 12630.575: 95.7077% ( 44) 00:27:34.411 12630.575 - 12690.153: 96.0110% ( 33) 00:27:34.411 12690.153 - 12749.731: 96.2684% ( 28) 00:27:34.411 12749.731 - 12809.309: 96.5349% ( 29) 00:27:34.411 12809.309 - 12868.887: 96.8199% ( 31) 00:27:34.411 12868.887 - 12928.465: 96.9853% ( 18) 00:27:34.411 12928.465 - 12988.044: 97.0772% ( 10) 00:27:34.411 12988.044 - 13047.622: 97.1507% ( 8) 00:27:34.411 13047.622 - 13107.200: 97.2151% ( 7) 00:27:34.411 13107.200 - 13166.778: 97.3162% ( 11) 00:27:34.411 13166.778 - 13226.356: 97.4081% ( 10) 00:27:34.411 13226.356 - 13285.935: 97.4540% ( 5) 00:27:34.411 13285.935 - 13345.513: 97.4724% ( 2) 00:27:34.411 13345.513 - 13405.091: 97.4908% ( 2) 00:27:34.411 13405.091 - 13464.669: 97.5184% ( 3) 00:27:34.411 13464.669 - 13524.247: 97.5368% ( 2) 
00:27:34.411 13524.247 - 13583.825: 97.5643% ( 3) 00:27:34.411 13583.825 - 13643.404: 97.5827% ( 2) 00:27:34.411 13643.404 - 13702.982: 97.6011% ( 2) 00:27:34.411 13702.982 - 13762.560: 97.6654% ( 7) 00:27:34.411 13762.560 - 13822.138: 97.7206% ( 6) 00:27:34.411 13822.138 - 13881.716: 97.7665% ( 5) 00:27:34.411 13881.716 - 13941.295: 97.8125% ( 5) 00:27:34.411 13941.295 - 14000.873: 97.8401% ( 3) 00:27:34.411 14000.873 - 14060.451: 97.8768% ( 4) 00:27:34.411 14060.451 - 14120.029: 97.9779% ( 11) 00:27:34.411 14120.029 - 14179.607: 98.0699% ( 10) 00:27:34.411 14179.607 - 14239.185: 98.1434% ( 8) 00:27:34.411 14239.185 - 14298.764: 98.2077% ( 7) 00:27:34.411 14298.764 - 14358.342: 98.2904% ( 9) 00:27:34.411 14358.342 - 14417.920: 98.3640% ( 8) 00:27:34.411 14417.920 - 14477.498: 98.4926% ( 14) 00:27:34.411 14477.498 - 14537.076: 98.5662% ( 8) 00:27:34.411 14537.076 - 14596.655: 98.6121% ( 5) 00:27:34.411 14596.655 - 14656.233: 98.6489% ( 4) 00:27:34.411 14656.233 - 14715.811: 98.6857% ( 4) 00:27:34.411 14715.811 - 14775.389: 98.7316% ( 5) 00:27:34.411 14775.389 - 14834.967: 98.7684% ( 4) 00:27:34.411 14834.967 - 14894.545: 98.7868% ( 2) 00:27:34.411 14894.545 - 14954.124: 98.8143% ( 3) 00:27:34.411 14954.124 - 15013.702: 98.8235% ( 1) 00:27:34.411 31457.280 - 31695.593: 98.8511% ( 3) 00:27:34.411 31695.593 - 31933.905: 98.9062% ( 6) 00:27:34.411 31933.905 - 32172.218: 98.9522% ( 5) 00:27:34.411 32172.218 - 32410.531: 99.0074% ( 6) 00:27:34.411 32410.531 - 32648.844: 99.0533% ( 5) 00:27:34.411 32648.844 - 32887.156: 99.0993% ( 5) 00:27:34.411 32887.156 - 33125.469: 99.1360% ( 4) 00:27:34.411 33125.469 - 33363.782: 99.1820% ( 5) 00:27:34.411 33363.782 - 33602.095: 99.2371% ( 6) 00:27:34.411 33602.095 - 33840.407: 99.2831% ( 5) 00:27:34.411 33840.407 - 34078.720: 99.3290% ( 5) 00:27:34.411 34078.720 - 34317.033: 99.3750% ( 5) 00:27:34.411 34317.033 - 34555.345: 99.4118% ( 4) 00:27:34.411 40513.164 - 40751.476: 99.4669% ( 6) 00:27:34.411 40751.476 - 40989.789: 99.5129% ( 5) 00:27:34.411 40989.789 - 41228.102: 99.5680% ( 6) 00:27:34.411 41228.102 - 41466.415: 99.6140% ( 5) 00:27:34.411 41466.415 - 41704.727: 99.6691% ( 6) 00:27:34.411 41704.727 - 41943.040: 99.7151% ( 5) 00:27:34.411 41943.040 - 42181.353: 99.7610% ( 5) 00:27:34.411 42181.353 - 42419.665: 99.8070% ( 5) 00:27:34.411 42419.665 - 42657.978: 99.8621% ( 6) 00:27:34.411 42657.978 - 42896.291: 99.9081% ( 5) 00:27:34.411 42896.291 - 43134.604: 99.9449% ( 4) 00:27:34.411 43134.604 - 43372.916: 100.0000% ( 6) 00:27:34.411 00:27:34.411 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:27:34.411 ============================================================================== 00:27:34.411 Range in us Cumulative IO count 00:27:34.411 9711.244 - 9770.822: 0.0092% ( 1) 00:27:34.411 9770.822 - 9830.400: 0.0276% ( 2) 00:27:34.411 9830.400 - 9889.978: 0.0368% ( 1) 00:27:34.411 9889.978 - 9949.556: 0.1379% ( 11) 00:27:34.411 9949.556 - 10009.135: 0.3033% ( 18) 00:27:34.411 10009.135 - 10068.713: 0.5974% ( 32) 00:27:34.411 10068.713 - 10128.291: 0.9467% ( 38) 00:27:34.412 10128.291 - 10187.869: 1.3879% ( 48) 00:27:34.412 10187.869 - 10247.447: 2.0772% ( 75) 00:27:34.412 10247.447 - 10307.025: 3.1342% ( 115) 00:27:34.412 10307.025 - 10366.604: 4.1544% ( 111) 00:27:34.412 10366.604 - 10426.182: 5.0551% ( 98) 00:27:34.412 10426.182 - 10485.760: 6.0662% ( 110) 00:27:34.412 10485.760 - 10545.338: 7.4632% ( 152) 00:27:34.412 10545.338 - 10604.916: 9.0717% ( 175) 00:27:34.412 10604.916 - 10664.495: 11.0478% ( 215) 00:27:34.412 10664.495 - 
10724.073: 13.4467% ( 261) 00:27:34.412 10724.073 - 10783.651: 15.9467% ( 272) 00:27:34.412 10783.651 - 10843.229: 18.6397% ( 293) 00:27:34.412 10843.229 - 10902.807: 21.7831% ( 342) 00:27:34.412 10902.807 - 10962.385: 25.2482% ( 377) 00:27:34.412 10962.385 - 11021.964: 29.2004% ( 430) 00:27:34.412 11021.964 - 11081.542: 33.1066% ( 425) 00:27:34.412 11081.542 - 11141.120: 37.4265% ( 470) 00:27:34.412 11141.120 - 11200.698: 41.9761% ( 495) 00:27:34.412 11200.698 - 11260.276: 45.7537% ( 411) 00:27:34.412 11260.276 - 11319.855: 49.8621% ( 447) 00:27:34.412 11319.855 - 11379.433: 53.5754% ( 404) 00:27:34.412 11379.433 - 11439.011: 57.1599% ( 390) 00:27:34.412 11439.011 - 11498.589: 60.6801% ( 383) 00:27:34.412 11498.589 - 11558.167: 63.7776% ( 337) 00:27:34.412 11558.167 - 11617.745: 67.2426% ( 377) 00:27:34.412 11617.745 - 11677.324: 70.1654% ( 318) 00:27:34.412 11677.324 - 11736.902: 73.1618% ( 326) 00:27:34.412 11736.902 - 11796.480: 75.7721% ( 284) 00:27:34.412 11796.480 - 11856.058: 78.2077% ( 265) 00:27:34.412 11856.058 - 11915.636: 80.4320% ( 242) 00:27:34.412 11915.636 - 11975.215: 82.8493% ( 263) 00:27:34.412 11975.215 - 12034.793: 85.1379% ( 249) 00:27:34.412 12034.793 - 12094.371: 86.6636% ( 166) 00:27:34.412 12094.371 - 12153.949: 88.1066% ( 157) 00:27:34.412 12153.949 - 12213.527: 89.5221% ( 154) 00:27:34.412 12213.527 - 12273.105: 90.6893% ( 127) 00:27:34.412 12273.105 - 12332.684: 91.9577% ( 138) 00:27:34.412 12332.684 - 12392.262: 93.0331% ( 117) 00:27:34.412 12392.262 - 12451.840: 93.7684% ( 80) 00:27:34.412 12451.840 - 12511.418: 94.3934% ( 68) 00:27:34.412 12511.418 - 12570.996: 94.8070% ( 45) 00:27:34.412 12570.996 - 12630.575: 95.3033% ( 54) 00:27:34.412 12630.575 - 12690.153: 95.6710% ( 40) 00:27:34.412 12690.153 - 12749.731: 95.9467% ( 30) 00:27:34.412 12749.731 - 12809.309: 96.2040% ( 28) 00:27:34.412 12809.309 - 12868.887: 96.5257% ( 35) 00:27:34.412 12868.887 - 12928.465: 96.8107% ( 31) 00:27:34.412 12928.465 - 12988.044: 97.0037% ( 21) 00:27:34.412 12988.044 - 13047.622: 97.1967% ( 21) 00:27:34.412 13047.622 - 13107.200: 97.3162% ( 13) 00:27:34.412 13107.200 - 13166.778: 97.3989% ( 9) 00:27:34.412 13166.778 - 13226.356: 97.4816% ( 9) 00:27:34.412 13226.356 - 13285.935: 97.5276% ( 5) 00:27:34.412 13285.935 - 13345.513: 97.5551% ( 3) 00:27:34.412 13345.513 - 13405.091: 97.5919% ( 4) 00:27:34.412 13405.091 - 13464.669: 97.6103% ( 2) 00:27:34.412 13464.669 - 13524.247: 97.6287% ( 2) 00:27:34.412 13524.247 - 13583.825: 97.6471% ( 2) 00:27:34.412 13941.295 - 14000.873: 97.6562% ( 1) 00:27:34.412 14000.873 - 14060.451: 97.6930% ( 4) 00:27:34.412 14060.451 - 14120.029: 97.7390% ( 5) 00:27:34.412 14120.029 - 14179.607: 97.7849% ( 5) 00:27:34.412 14179.607 - 14239.185: 97.8125% ( 3) 00:27:34.412 14239.185 - 14298.764: 97.8952% ( 9) 00:27:34.412 14298.764 - 14358.342: 97.9596% ( 7) 00:27:34.412 14358.342 - 14417.920: 98.0607% ( 11) 00:27:34.412 14417.920 - 14477.498: 98.2261% ( 18) 00:27:34.412 14477.498 - 14537.076: 98.3548% ( 14) 00:27:34.412 14537.076 - 14596.655: 98.5294% ( 19) 00:27:34.412 14596.655 - 14656.233: 98.5938% ( 7) 00:27:34.412 14656.233 - 14715.811: 98.6489% ( 6) 00:27:34.412 14715.811 - 14775.389: 98.6949% ( 5) 00:27:34.412 14775.389 - 14834.967: 98.7316% ( 4) 00:27:34.412 14834.967 - 14894.545: 98.7684% ( 4) 00:27:34.412 14894.545 - 14954.124: 98.8143% ( 5) 00:27:34.412 14954.124 - 15013.702: 98.8235% ( 1) 00:27:34.412 28835.840 - 28954.996: 98.8603% ( 4) 00:27:34.412 28954.996 - 29074.153: 98.9062% ( 5) 00:27:34.412 29074.153 - 29193.309: 98.9522% ( 5) 
00:27:34.412 29193.309 - 29312.465: 98.9890% ( 4) 00:27:34.412 29312.465 - 29431.622: 99.0074% ( 2) 00:27:34.412 29431.622 - 29550.778: 99.0257% ( 2) 00:27:34.412 29550.778 - 29669.935: 99.0441% ( 2) 00:27:34.412 29669.935 - 29789.091: 99.0625% ( 2) 00:27:34.412 29789.091 - 29908.247: 99.0901% ( 3) 00:27:34.412 29908.247 - 30027.404: 99.1085% ( 2) 00:27:34.412 30027.404 - 30146.560: 99.1268% ( 2) 00:27:34.412 30146.560 - 30265.716: 99.1544% ( 3) 00:27:34.412 30265.716 - 30384.873: 99.1728% ( 2) 00:27:34.412 30384.873 - 30504.029: 99.1912% ( 2) 00:27:34.412 30504.029 - 30742.342: 99.2371% ( 5) 00:27:34.412 30742.342 - 30980.655: 99.2831% ( 5) 00:27:34.412 30980.655 - 31218.967: 99.3290% ( 5) 00:27:34.412 31218.967 - 31457.280: 99.3842% ( 6) 00:27:34.412 31457.280 - 31695.593: 99.4118% ( 3) 00:27:34.412 37176.785 - 37415.098: 99.4210% ( 1) 00:27:34.412 37415.098 - 37653.411: 99.4669% ( 5) 00:27:34.412 37653.411 - 37891.724: 99.5129% ( 5) 00:27:34.412 37891.724 - 38130.036: 99.5588% ( 5) 00:27:34.412 38130.036 - 38368.349: 99.6140% ( 6) 00:27:34.412 38368.349 - 38606.662: 99.6599% ( 5) 00:27:34.412 38606.662 - 38844.975: 99.7059% ( 5) 00:27:34.412 38844.975 - 39083.287: 99.7518% ( 5) 00:27:34.412 39083.287 - 39321.600: 99.7978% ( 5) 00:27:34.412 39321.600 - 39559.913: 99.8529% ( 6) 00:27:34.412 39559.913 - 39798.225: 99.8989% ( 5) 00:27:34.412 39798.225 - 40036.538: 99.9449% ( 5) 00:27:34.412 40036.538 - 40274.851: 99.9908% ( 5) 00:27:34.412 40274.851 - 40513.164: 100.0000% ( 1) 00:27:34.412 00:27:34.412 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:27:34.412 ============================================================================== 00:27:34.412 Range in us Cumulative IO count 00:27:34.412 9711.244 - 9770.822: 0.0092% ( 1) 00:27:34.412 9830.400 - 9889.978: 0.0184% ( 1) 00:27:34.412 9889.978 - 9949.556: 0.1379% ( 13) 00:27:34.412 9949.556 - 10009.135: 0.3033% ( 18) 00:27:34.412 10009.135 - 10068.713: 0.5882% ( 31) 00:27:34.412 10068.713 - 10128.291: 0.9191% ( 36) 00:27:34.412 10128.291 - 10187.869: 1.3327% ( 45) 00:27:34.412 10187.869 - 10247.447: 2.2335% ( 98) 00:27:34.412 10247.447 - 10307.025: 3.1801% ( 103) 00:27:34.412 10307.025 - 10366.604: 4.2004% ( 111) 00:27:34.412 10366.604 - 10426.182: 5.1471% ( 103) 00:27:34.412 10426.182 - 10485.760: 6.3327% ( 129) 00:27:34.412 10485.760 - 10545.338: 8.0239% ( 184) 00:27:34.412 10545.338 - 10604.916: 9.3474% ( 144) 00:27:34.412 10604.916 - 10664.495: 10.7812% ( 156) 00:27:34.412 10664.495 - 10724.073: 13.0239% ( 244) 00:27:34.412 10724.073 - 10783.651: 15.1103% ( 227) 00:27:34.412 10783.651 - 10843.229: 17.7849% ( 291) 00:27:34.412 10843.229 - 10902.807: 20.9835% ( 348) 00:27:34.412 10902.807 - 10962.385: 24.7243% ( 407) 00:27:34.412 10962.385 - 11021.964: 28.6857% ( 431) 00:27:34.412 11021.964 - 11081.542: 33.0974% ( 480) 00:27:34.412 11081.542 - 11141.120: 38.1066% ( 545) 00:27:34.412 11141.120 - 11200.698: 42.4081% ( 468) 00:27:34.412 11200.698 - 11260.276: 45.9926% ( 390) 00:27:34.412 11260.276 - 11319.855: 49.8621% ( 421) 00:27:34.412 11319.855 - 11379.433: 53.7224% ( 420) 00:27:34.412 11379.433 - 11439.011: 57.1599% ( 374) 00:27:34.412 11439.011 - 11498.589: 60.9743% ( 415) 00:27:34.412 11498.589 - 11558.167: 64.0901% ( 339) 00:27:34.412 11558.167 - 11617.745: 66.8015% ( 295) 00:27:34.412 11617.745 - 11677.324: 69.2647% ( 268) 00:27:34.412 11677.324 - 11736.902: 72.0404% ( 302) 00:27:34.412 11736.902 - 11796.480: 74.7518% ( 295) 00:27:34.412 11796.480 - 11856.058: 77.2702% ( 274) 00:27:34.412 11856.058 - 11915.636: 
79.3842% ( 230) 00:27:34.412 11915.636 - 11975.215: 81.8015% ( 263) 00:27:34.412 11975.215 - 12034.793: 83.9154% ( 230) 00:27:34.412 12034.793 - 12094.371: 85.8088% ( 206) 00:27:34.412 12094.371 - 12153.949: 87.7298% ( 209) 00:27:34.412 12153.949 - 12213.527: 89.0993% ( 149) 00:27:34.412 12213.527 - 12273.105: 90.5423% ( 157) 00:27:34.412 12273.105 - 12332.684: 92.1324% ( 173) 00:27:34.412 12332.684 - 12392.262: 93.0147% ( 96) 00:27:34.412 12392.262 - 12451.840: 93.6489% ( 69) 00:27:34.412 12451.840 - 12511.418: 94.1912% ( 59) 00:27:34.412 12511.418 - 12570.996: 94.6967% ( 55) 00:27:34.412 12570.996 - 12630.575: 95.1562% ( 50) 00:27:34.412 12630.575 - 12690.153: 95.5790% ( 46) 00:27:34.412 12690.153 - 12749.731: 96.0478% ( 51) 00:27:34.412 12749.731 - 12809.309: 96.3235% ( 30) 00:27:34.412 12809.309 - 12868.887: 96.5441% ( 24) 00:27:34.412 12868.887 - 12928.465: 96.8015% ( 28) 00:27:34.412 12928.465 - 12988.044: 96.9301% ( 14) 00:27:34.412 12988.044 - 13047.622: 97.0404% ( 12) 00:27:34.412 13047.622 - 13107.200: 97.0956% ( 6) 00:27:34.412 13107.200 - 13166.778: 97.1691% ( 8) 00:27:34.412 13166.778 - 13226.356: 97.3070% ( 15) 00:27:34.412 13226.356 - 13285.935: 97.4265% ( 13) 00:27:34.412 13285.935 - 13345.513: 97.5643% ( 15) 00:27:34.412 13345.513 - 13405.091: 97.7114% ( 16) 00:27:34.412 13405.091 - 13464.669: 97.8309% ( 13) 00:27:34.412 13464.669 - 13524.247: 97.9779% ( 16) 00:27:34.412 13524.247 - 13583.825: 98.0607% ( 9) 00:27:34.412 13583.825 - 13643.404: 98.0974% ( 4) 00:27:34.412 13643.404 - 13702.982: 98.1158% ( 2) 00:27:34.412 13702.982 - 13762.560: 98.1342% ( 2) 00:27:34.412 13762.560 - 13822.138: 98.1526% ( 2) 00:27:34.412 13822.138 - 13881.716: 98.1710% ( 2) 00:27:34.412 13881.716 - 13941.295: 98.1893% ( 2) 00:27:34.412 13941.295 - 14000.873: 98.2077% ( 2) 00:27:34.412 14000.873 - 14060.451: 98.2812% ( 8) 00:27:34.412 14060.451 - 14120.029: 98.3272% ( 5) 00:27:34.412 14120.029 - 14179.607: 98.3732% ( 5) 00:27:34.412 14179.607 - 14239.185: 98.4007% ( 3) 00:27:34.412 14239.185 - 14298.764: 98.4651% ( 7) 00:27:34.412 14298.764 - 14358.342: 98.4926% ( 3) 00:27:34.412 14358.342 - 14417.920: 98.5294% ( 4) 00:27:34.412 14417.920 - 14477.498: 98.5570% ( 3) 00:27:34.412 14477.498 - 14537.076: 98.5754% ( 2) 00:27:34.412 14537.076 - 14596.655: 98.5938% ( 2) 00:27:34.412 14596.655 - 14656.233: 98.6121% ( 2) 00:27:34.412 14656.233 - 14715.811: 98.6213% ( 1) 00:27:34.412 14715.811 - 14775.389: 98.6489% ( 3) 00:27:34.412 14775.389 - 14834.967: 98.6765% ( 3) 00:27:34.412 14834.967 - 14894.545: 98.6949% ( 2) 00:27:34.413 14894.545 - 14954.124: 98.7224% ( 3) 00:27:34.413 14954.124 - 15013.702: 98.7408% ( 2) 00:27:34.413 15013.702 - 15073.280: 98.7684% ( 3) 00:27:34.413 15073.280 - 15132.858: 98.7960% ( 3) 00:27:34.413 15132.858 - 15192.436: 98.8143% ( 2) 00:27:34.413 15192.436 - 15252.015: 98.8235% ( 1) 00:27:34.413 25737.775 - 25856.931: 98.8327% ( 1) 00:27:34.413 25856.931 - 25976.087: 98.8695% ( 4) 00:27:34.413 25976.087 - 26095.244: 98.9154% ( 5) 00:27:34.413 26095.244 - 26214.400: 98.9522% ( 4) 00:27:34.413 26214.400 - 26333.556: 98.9982% ( 5) 00:27:34.413 26333.556 - 26452.713: 99.0349% ( 4) 00:27:34.413 26452.713 - 26571.869: 99.0533% ( 2) 00:27:34.413 26571.869 - 26691.025: 99.0717% ( 2) 00:27:34.413 26691.025 - 26810.182: 99.0993% ( 3) 00:27:34.413 26810.182 - 26929.338: 99.1176% ( 2) 00:27:34.413 26929.338 - 27048.495: 99.1360% ( 2) 00:27:34.413 27048.495 - 27167.651: 99.1636% ( 3) 00:27:34.413 27167.651 - 27286.807: 99.1820% ( 2) 00:27:34.413 27286.807 - 27405.964: 99.2004% ( 2) 
00:27:34.413 27405.964 - 27525.120: 99.2188% ( 2) 00:27:34.413 27525.120 - 27644.276: 99.2371% ( 2) 00:27:34.413 27644.276 - 27763.433: 99.2555% ( 2) 00:27:34.413 27763.433 - 27882.589: 99.2831% ( 3) 00:27:34.413 27882.589 - 28001.745: 99.3107% ( 3) 00:27:34.413 28001.745 - 28120.902: 99.3290% ( 2) 00:27:34.413 28120.902 - 28240.058: 99.3474% ( 2) 00:27:34.413 28240.058 - 28359.215: 99.3750% ( 3) 00:27:34.413 28359.215 - 28478.371: 99.3934% ( 2) 00:27:34.413 28478.371 - 28597.527: 99.4118% ( 2) 00:27:34.413 31933.905 - 32172.218: 99.4210% ( 1) 00:27:34.413 32172.218 - 32410.531: 99.4577% ( 4) 00:27:34.413 34317.033 - 34555.345: 99.5037% ( 5) 00:27:34.413 34555.345 - 34793.658: 99.5496% ( 5) 00:27:34.413 34793.658 - 35031.971: 99.5956% ( 5) 00:27:34.413 35031.971 - 35270.284: 99.6415% ( 5) 00:27:34.413 35270.284 - 35508.596: 99.6875% ( 5) 00:27:34.413 35508.596 - 35746.909: 99.7335% ( 5) 00:27:34.413 35746.909 - 35985.222: 99.7794% ( 5) 00:27:34.413 35985.222 - 36223.535: 99.8254% ( 5) 00:27:34.413 36223.535 - 36461.847: 99.8713% ( 5) 00:27:34.413 36461.847 - 36700.160: 99.9173% ( 5) 00:27:34.413 36700.160 - 36938.473: 99.9724% ( 6) 00:27:34.413 36938.473 - 37176.785: 100.0000% ( 3) 00:27:34.413 00:27:34.413 07:36:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:27:34.413 00:27:34.413 real 0m2.724s 00:27:34.413 user 0m2.308s 00:27:34.413 sys 0m0.310s 00:27:34.413 07:36:12 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.413 07:36:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:27:34.413 ************************************ 00:27:34.413 END TEST nvme_perf 00:27:34.413 ************************************ 00:27:34.413 07:36:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:34.413 07:36:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:34.413 07:36:12 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:34.413 07:36:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.413 07:36:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:34.413 ************************************ 00:27:34.413 START TEST nvme_hello_world 00:27:34.413 ************************************ 00:27:34.413 07:36:12 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:27:34.672 Initializing NVMe Controllers 00:27:34.672 Attached to 0000:00:10.0 00:27:34.672 Namespace ID: 1 size: 6GB 00:27:34.672 Attached to 0000:00:11.0 00:27:34.672 Namespace ID: 1 size: 5GB 00:27:34.672 Attached to 0000:00:13.0 00:27:34.672 Namespace ID: 1 size: 1GB 00:27:34.672 Attached to 0000:00:12.0 00:27:34.672 Namespace ID: 1 size: 4GB 00:27:34.672 Namespace ID: 2 size: 4GB 00:27:34.672 Namespace ID: 3 size: 4GB 00:27:34.672 Initialization complete. 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 00:27:34.672 INFO: using host memory buffer for IO 00:27:34.672 Hello world! 
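00:27:34.672 Note on the hello_world output above: the example probes and attaches each controller, then for every namespace allocates an I/O qpair and a DMA-able buffer, writes a "Hello world!" string to LBA 0, reads it back, and prints it; the "using host memory buffer for IO" lines just report which buffer type was used. A rough sketch of that write/read-back core against the public SPDK NVMe API; this is an illustration under stated assumptions, not the example's exact code — the probe/attach boilerplate is assumed to exist elsewhere, the function and variable names are illustrative, and error handling is omitted:

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static volatile bool io_done;       /* set by the completion callback */

    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* A real program would also check spdk_nvme_cpl_is_error(cpl). */
        io_done = true;
    }

    /* Write "Hello world!" to LBA 0 of one namespace and read it back. */
    static void hello_one_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
    {
        struct spdk_nvme_qpair *qpair =
            spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sz, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

        snprintf(buf, sz, "%s", "Hello world!\n");
        io_done = false;
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* LBA count */,
                               io_complete, NULL, 0);
        while (!io_done)
            spdk_nvme_qpair_process_completions(qpair, 0);

        memset(buf, 0, sz);
        io_done = false;
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
        while (!io_done)
            spdk_nvme_qpair_process_completions(qpair, 0);

        printf("%s", buf);              /* "Hello world!" on success */
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }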
00:27:34.672 00:27:34.672 real 0m0.274s 00:27:34.672 user 0m0.106s 00:27:34.672 sys 0m0.130s 00:27:34.672 07:36:13 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.672 07:36:13 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:34.672 ************************************ 00:27:34.672 END TEST nvme_hello_world 00:27:34.672 ************************************ 00:27:34.672 07:36:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:34.672 07:36:13 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:34.672 07:36:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:34.672 07:36:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.672 07:36:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:34.672 ************************************ 00:27:34.672 START TEST nvme_sgl 00:27:34.672 ************************************ 00:27:34.672 07:36:13 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:27:34.930 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:27:34.930 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:27:34.930 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:27:35.188 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:27:35.188 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:27:35.188 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:27:35.188 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:27:35.188 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:27:35.188 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:27:35.188 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:27:35.188 NVMe Readv/Writev Request test 00:27:35.188 Attached to 0000:00:10.0 00:27:35.188 Attached to 0000:00:11.0 00:27:35.188 Attached to 0000:00:13.0 00:27:35.188 Attached to 0000:00:12.0 00:27:35.188 0000:00:10.0: build_io_request_2 test passed 00:27:35.188 0000:00:10.0: build_io_request_4 test passed 00:27:35.188 0000:00:10.0: build_io_request_5 test passed 00:27:35.188 0000:00:10.0: build_io_request_6 test passed 00:27:35.188 0000:00:10.0: build_io_request_7 test passed 00:27:35.188 0000:00:10.0: build_io_request_10 test passed 00:27:35.188 0000:00:11.0: build_io_request_2 test passed 00:27:35.188 0000:00:11.0: build_io_request_4 test passed 00:27:35.188 0000:00:11.0: build_io_request_5 test passed 00:27:35.188 0000:00:11.0: build_io_request_6 test passed 00:27:35.188 0000:00:11.0: build_io_request_7 test passed 00:27:35.188 0000:00:11.0: build_io_request_10 test passed 00:27:35.188 Cleaning up... 00:27:35.188 00:27:35.188 real 0m0.448s 00:27:35.188 user 0m0.221s 00:27:35.188 sys 0m0.182s 00:27:35.188 07:36:13 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.188 07:36:13 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:27:35.188 ************************************ 00:27:35.188 END TEST nvme_sgl 00:27:35.188 ************************************ 00:27:35.189 07:36:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:35.189 07:36:13 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:35.189 07:36:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.189 07:36:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.189 07:36:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 ************************************ 00:27:35.189 START TEST nvme_e2edp 00:27:35.189 ************************************ 00:27:35.189 07:36:13 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:27:35.447 NVMe Write/Read with End-to-End data protection test 00:27:35.447 Attached to 0000:00:10.0 00:27:35.447 Attached to 0000:00:11.0 00:27:35.447 Attached to 0000:00:13.0 00:27:35.447 Attached to 0000:00:12.0 00:27:35.447 Cleaning up... 
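00:27:35.188 Note on the nvme_sgl output above: the build_io_request_* cases exercise SPDK's vectored (scatter-gather) submission path, where the request payload is described by callbacks that hand the driver one scatter-gather element at a time instead of a single contiguous buffer; the "Invalid IO length parameter" lines appear to be intentional negative cases that the driver is expected to reject, while the "test passed" lines are the accepted shapes. A hedged sketch of how such a vectored read is submitted with the public API — the context structure and callback names here are illustrative, not the test's own:

    #include "spdk/nvme.h"

    /* Illustrative SGL context: two host buffers making up one I/O. */
    struct sgl_ctx {
        struct { void *base; uint32_t len; } iov[2];
        int idx;
    };

    static void reset_sgl(void *cb_arg, uint32_t offset)
    {
        struct sgl_ctx *c = cb_arg;
        c->idx = 0;                     /* a full implementation would honor 'offset' */
    }

    static int next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = cb_arg;
        *address = c->iov[c->idx].base;
        *length  = c->iov[c->idx].len;
        c->idx++;
        return 0;
    }

    /* Submit a two-segment read; if the segment lengths handed back by the
     * callbacks do not describe a valid transfer for lba_count blocks, the
     * driver rejects the request, which is what the negative cases above check. */
    static int submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                               struct sgl_ctx *c, uint32_t lba_count,
                               spdk_nvme_cmd_cb cb_fn)
    {
        return spdk_nvme_ns_cmd_readv(ns, qp, 0 /* LBA */, lba_count,
                                      cb_fn, c, 0, reset_sgl, next_sge);
    }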
00:27:35.447 00:27:35.447 real 0m0.309s 00:27:35.447 user 0m0.119s 00:27:35.447 sys 0m0.150s 00:27:35.447 07:36:14 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.447 07:36:14 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 ************************************ 00:27:35.447 END TEST nvme_e2edp 00:27:35.447 ************************************ 00:27:35.447 07:36:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:35.447 07:36:14 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:35.447 07:36:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.447 07:36:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.447 07:36:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 ************************************ 00:27:35.447 START TEST nvme_reserve 00:27:35.447 ************************************ 00:27:35.447 07:36:14 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:27:36.014 ===================================================== 00:27:36.015 NVMe Controller at PCI bus 0, device 16, function 0 00:27:36.015 ===================================================== 00:27:36.015 Reservations: Not Supported 00:27:36.015 ===================================================== 00:27:36.015 NVMe Controller at PCI bus 0, device 17, function 0 00:27:36.015 ===================================================== 00:27:36.015 Reservations: Not Supported 00:27:36.015 ===================================================== 00:27:36.015 NVMe Controller at PCI bus 0, device 19, function 0 00:27:36.015 ===================================================== 00:27:36.015 Reservations: Not Supported 00:27:36.015 ===================================================== 00:27:36.015 NVMe Controller at PCI bus 0, device 18, function 0 00:27:36.015 ===================================================== 00:27:36.015 Reservations: Not Supported 00:27:36.015 Reservation test passed 00:27:36.015 00:27:36.015 real 0m0.340s 00:27:36.015 user 0m0.115s 00:27:36.015 sys 0m0.179s 00:27:36.015 07:36:14 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.015 ************************************ 00:27:36.015 END TEST nvme_reserve 00:27:36.015 07:36:14 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:27:36.015 ************************************ 00:27:36.015 07:36:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:36.015 07:36:14 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:36.015 07:36:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:36.015 07:36:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.015 07:36:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:36.015 ************************************ 00:27:36.015 START TEST nvme_err_injection 00:27:36.015 ************************************ 00:27:36.015 07:36:14 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:27:36.275 NVMe Error Injection test 00:27:36.275 Attached to 0000:00:10.0 00:27:36.275 Attached to 0000:00:11.0 00:27:36.275 Attached to 0000:00:13.0 00:27:36.275 Attached to 0000:00:12.0 00:27:36.275 0000:00:10.0: get features failed as expected 00:27:36.275 0000:00:11.0: get features 
failed as expected 00:27:36.275 0000:00:13.0: get features failed as expected 00:27:36.275 0000:00:12.0: get features failed as expected 00:27:36.275 0000:00:10.0: get features successfully as expected 00:27:36.275 0000:00:11.0: get features successfully as expected 00:27:36.275 0000:00:13.0: get features successfully as expected 00:27:36.275 0000:00:12.0: get features successfully as expected 00:27:36.275 0000:00:11.0: read failed as expected 00:27:36.275 0000:00:13.0: read failed as expected 00:27:36.275 0000:00:12.0: read failed as expected 00:27:36.275 0000:00:10.0: read failed as expected 00:27:36.275 0000:00:10.0: read successfully as expected 00:27:36.275 0000:00:11.0: read successfully as expected 00:27:36.275 0000:00:13.0: read successfully as expected 00:27:36.275 0000:00:12.0: read successfully as expected 00:27:36.275 Cleaning up... 00:27:36.275 ************************************ 00:27:36.275 END TEST nvme_err_injection 00:27:36.275 ************************************ 00:27:36.275 00:27:36.275 real 0m0.319s 00:27:36.275 user 0m0.130s 00:27:36.275 sys 0m0.144s 00:27:36.275 07:36:14 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.275 07:36:14 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:27:36.275 07:36:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:36.275 07:36:14 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:36.275 07:36:14 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:27:36.275 07:36:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.275 07:36:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:36.275 ************************************ 00:27:36.275 START TEST nvme_overhead 00:27:36.275 ************************************ 00:27:36.275 07:36:14 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:27:37.651 Initializing NVMe Controllers 00:27:37.651 Attached to 0000:00:10.0 00:27:37.651 Attached to 0000:00:11.0 00:27:37.651 Attached to 0000:00:13.0 00:27:37.651 Attached to 0000:00:12.0 00:27:37.651 Initialization complete. Launching workers. 
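00:27:37.651 Note on the nvme_overhead results that follow: the tool reports two per-I/O software-path distributions, "submit (in ns)" and "complete (in ns)". One plausible way to collect such figures at queue depth 1 is to bracket the submission call and each completion poll with TSC reads; the sketch below shows that pattern only, it is not claimed to be the tool's exact accounting, and all names are illustrative (the 'done' flag is assumed to be set by the completion callback passed in):

    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Convert a TSC delta to nanoseconds. */
    static uint64_t tsc_to_ns(uint64_t tsc_delta)
    {
        return tsc_delta * 1000000000ULL / spdk_get_ticks_hz();
    }

    /* One queue-depth-1 iteration: time spent in the submit call and time
     * spent polling for the completion are accumulated separately. */
    static void one_timed_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                             void *buf, spdk_nvme_cmd_cb cb_fn, volatile bool *done,
                             uint64_t *submit_ns, uint64_t *complete_ns)
    {
        uint64_t t0 = spdk_get_ticks();
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* LBA */, 1, cb_fn, (void *)done, 0);
        *submit_ns = tsc_to_ns(spdk_get_ticks() - t0);

        *complete_ns = 0;
        while (!*done) {
            uint64_t t1 = spdk_get_ticks();
            spdk_nvme_qpair_process_completions(qp, 0);
            *complete_ns += tsc_to_ns(spdk_get_ticks() - t1);
        }
    }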
00:27:37.651 submit (in ns) avg, min, max = 17018.7, 13868.6, 108969.1 00:27:37.651 complete (in ns) avg, min, max = 10856.9, 8846.4, 88997.3 00:27:37.651 00:27:37.651 Submit histogram 00:27:37.651 ================ 00:27:37.651 Range in us Cumulative Count 00:27:37.651 13.847 - 13.905: 0.0594% ( 5) 00:27:37.651 13.905 - 13.964: 0.2494% ( 16) 00:27:37.651 13.964 - 14.022: 0.6295% ( 32) 00:27:37.651 14.022 - 14.080: 1.3658% ( 62) 00:27:37.651 14.080 - 14.138: 2.8860% ( 128) 00:27:37.651 14.138 - 14.196: 5.1544% ( 191) 00:27:37.651 14.196 - 14.255: 7.4941% ( 197) 00:27:37.651 14.255 - 14.313: 9.6200% ( 179) 00:27:37.651 14.313 - 14.371: 11.0689% ( 122) 00:27:37.651 14.371 - 14.429: 11.9002% ( 70) 00:27:37.651 14.429 - 14.487: 12.4703% ( 48) 00:27:37.651 14.487 - 14.545: 13.1710% ( 59) 00:27:37.651 14.545 - 14.604: 13.5867% ( 35) 00:27:37.651 14.604 - 14.662: 14.2280% ( 54) 00:27:37.651 14.662 - 14.720: 14.7150% ( 41) 00:27:37.651 14.720 - 14.778: 15.1425% ( 36) 00:27:37.651 14.778 - 14.836: 15.6413% ( 42) 00:27:37.651 14.836 - 14.895: 15.9739% ( 28) 00:27:37.651 14.895 - 15.011: 16.5558% ( 49) 00:27:37.651 15.011 - 15.127: 17.0665% ( 43) 00:27:37.651 15.127 - 15.244: 17.5178% ( 38) 00:27:37.651 15.244 - 15.360: 19.2874% ( 149) 00:27:37.651 15.360 - 15.476: 23.6223% ( 365) 00:27:37.651 15.476 - 15.593: 30.5463% ( 583) 00:27:37.651 15.593 - 15.709: 38.8836% ( 702) 00:27:37.651 15.709 - 15.825: 47.3753% ( 715) 00:27:37.651 15.825 - 15.942: 53.9074% ( 550) 00:27:37.651 15.942 - 16.058: 58.6580% ( 400) 00:27:37.651 16.058 - 16.175: 62.3872% ( 314) 00:27:37.651 16.175 - 16.291: 65.3325% ( 248) 00:27:37.651 16.291 - 16.407: 67.5653% ( 188) 00:27:37.651 16.407 - 16.524: 69.0143% ( 122) 00:27:37.651 16.524 - 16.640: 69.9881% ( 82) 00:27:37.651 16.640 - 16.756: 70.8076% ( 69) 00:27:37.651 16.756 - 16.873: 71.3658% ( 47) 00:27:37.651 16.873 - 16.989: 71.7933% ( 36) 00:27:37.651 16.989 - 17.105: 72.1021% ( 26) 00:27:37.651 17.105 - 17.222: 72.3397% ( 20) 00:27:37.651 17.222 - 17.338: 72.5416% ( 17) 00:27:37.651 17.338 - 17.455: 72.6603% ( 10) 00:27:37.651 17.455 - 17.571: 72.8029% ( 12) 00:27:37.651 17.571 - 17.687: 72.9097% ( 9) 00:27:37.651 17.687 - 17.804: 72.9810% ( 6) 00:27:37.651 17.804 - 17.920: 73.0760% ( 8) 00:27:37.651 17.920 - 18.036: 73.1354% ( 5) 00:27:37.651 18.036 - 18.153: 73.1948% ( 5) 00:27:37.651 18.153 - 18.269: 73.2185% ( 2) 00:27:37.651 18.269 - 18.385: 73.2542% ( 3) 00:27:37.651 18.385 - 18.502: 73.3848% ( 11) 00:27:37.651 18.502 - 18.618: 74.3112% ( 78) 00:27:37.651 18.618 - 18.735: 77.3159% ( 253) 00:27:37.651 18.735 - 18.851: 81.3539% ( 340) 00:27:37.651 18.851 - 18.967: 84.4181% ( 258) 00:27:37.651 18.967 - 19.084: 85.9026% ( 125) 00:27:37.651 19.084 - 19.200: 86.9834% ( 91) 00:27:37.651 19.200 - 19.316: 87.9929% ( 85) 00:27:37.651 19.316 - 19.433: 88.7767% ( 66) 00:27:37.651 19.433 - 19.549: 89.4062% ( 53) 00:27:37.651 19.549 - 19.665: 89.9406% ( 45) 00:27:37.651 19.665 - 19.782: 90.4988% ( 47) 00:27:37.651 19.782 - 19.898: 90.7957% ( 25) 00:27:37.651 19.898 - 20.015: 91.0214% ( 19) 00:27:37.651 20.015 - 20.131: 91.3183% ( 25) 00:27:37.651 20.131 - 20.247: 91.5083% ( 16) 00:27:37.651 20.247 - 20.364: 91.7340% ( 19) 00:27:37.651 20.364 - 20.480: 91.8765% ( 12) 00:27:37.651 20.480 - 20.596: 92.0190% ( 12) 00:27:37.651 20.596 - 20.713: 92.1615% ( 12) 00:27:37.651 20.713 - 20.829: 92.2803% ( 10) 00:27:37.651 20.829 - 20.945: 92.4347% ( 13) 00:27:37.651 20.945 - 21.062: 92.5534% ( 10) 00:27:37.651 21.062 - 21.178: 92.6128% ( 5) 00:27:37.651 21.178 - 21.295: 92.7197% ( 9) 
00:27:37.651 21.295 - 21.411: 92.8741% ( 13) 00:27:37.651 21.411 - 21.527: 92.9691% ( 8) 00:27:37.651 21.527 - 21.644: 93.0760% ( 9) 00:27:37.651 21.644 - 21.760: 93.2423% ( 14) 00:27:37.651 21.760 - 21.876: 93.4917% ( 21) 00:27:37.651 21.876 - 21.993: 93.6223% ( 11) 00:27:37.651 21.993 - 22.109: 93.7648% ( 12) 00:27:37.651 22.109 - 22.225: 93.9667% ( 17) 00:27:37.651 22.225 - 22.342: 94.1449% ( 15) 00:27:37.651 22.342 - 22.458: 94.3705% ( 19) 00:27:37.651 22.458 - 22.575: 94.5606% ( 16) 00:27:37.651 22.575 - 22.691: 94.6793% ( 10) 00:27:37.651 22.691 - 22.807: 94.8694% ( 16) 00:27:37.651 22.807 - 22.924: 95.0000% ( 11) 00:27:37.651 22.924 - 23.040: 95.0831% ( 7) 00:27:37.651 23.040 - 23.156: 95.2257% ( 12) 00:27:37.651 23.156 - 23.273: 95.3207% ( 8) 00:27:37.651 23.273 - 23.389: 95.3800% ( 5) 00:27:37.651 23.389 - 23.505: 95.4632% ( 7) 00:27:37.651 23.505 - 23.622: 95.5463% ( 7) 00:27:37.651 23.622 - 23.738: 95.6413% ( 8) 00:27:37.651 23.738 - 23.855: 95.7245% ( 7) 00:27:37.651 23.855 - 23.971: 95.8076% ( 7) 00:27:37.651 23.971 - 24.087: 95.8551% ( 4) 00:27:37.651 24.087 - 24.204: 96.0214% ( 14) 00:27:37.651 24.204 - 24.320: 96.0689% ( 4) 00:27:37.651 24.320 - 24.436: 96.1639% ( 8) 00:27:37.651 24.436 - 24.553: 96.2352% ( 6) 00:27:37.651 24.553 - 24.669: 96.3064% ( 6) 00:27:37.651 24.669 - 24.785: 96.3658% ( 5) 00:27:37.651 24.785 - 24.902: 96.4371% ( 6) 00:27:37.651 24.902 - 25.018: 96.4846% ( 4) 00:27:37.651 25.018 - 25.135: 96.5202% ( 3) 00:27:37.651 25.135 - 25.251: 96.6033% ( 7) 00:27:37.651 25.251 - 25.367: 96.7102% ( 9) 00:27:37.651 25.367 - 25.484: 96.7933% ( 7) 00:27:37.651 25.484 - 25.600: 96.9002% ( 9) 00:27:37.651 25.600 - 25.716: 96.9834% ( 7) 00:27:37.651 25.716 - 25.833: 97.0784% ( 8) 00:27:37.651 25.833 - 25.949: 97.1734% ( 8) 00:27:37.651 25.949 - 26.065: 97.2565% ( 7) 00:27:37.651 26.065 - 26.182: 97.2922% ( 3) 00:27:37.651 26.182 - 26.298: 97.3397% ( 4) 00:27:37.651 26.298 - 26.415: 97.4347% ( 8) 00:27:37.651 26.415 - 26.531: 97.5059% ( 6) 00:27:37.651 26.531 - 26.647: 97.6247% ( 10) 00:27:37.651 26.647 - 26.764: 97.7078% ( 7) 00:27:37.651 26.764 - 26.880: 97.7910% ( 7) 00:27:37.651 26.880 - 26.996: 97.8504% ( 5) 00:27:37.651 26.996 - 27.113: 97.9572% ( 9) 00:27:37.651 27.113 - 27.229: 98.0166% ( 5) 00:27:37.651 27.229 - 27.345: 98.0760% ( 5) 00:27:37.651 27.345 - 27.462: 98.1354% ( 5) 00:27:37.651 27.462 - 27.578: 98.2185% ( 7) 00:27:37.651 27.578 - 27.695: 98.2779% ( 5) 00:27:37.651 27.695 - 27.811: 98.2898% ( 1) 00:27:37.651 27.811 - 27.927: 98.3135% ( 2) 00:27:37.651 27.927 - 28.044: 98.3729% ( 5) 00:27:37.651 28.044 - 28.160: 98.3848% ( 1) 00:27:37.651 28.160 - 28.276: 98.4204% ( 3) 00:27:37.651 28.276 - 28.393: 98.4323% ( 1) 00:27:37.651 28.393 - 28.509: 98.4442% ( 1) 00:27:37.651 28.625 - 28.742: 98.4917% ( 4) 00:27:37.651 28.742 - 28.858: 98.5511% ( 5) 00:27:37.651 28.858 - 28.975: 98.5986% ( 4) 00:27:37.651 28.975 - 29.091: 98.6105% ( 1) 00:27:37.651 29.091 - 29.207: 98.6223% ( 1) 00:27:37.651 29.207 - 29.324: 98.6698% ( 4) 00:27:37.651 29.324 - 29.440: 98.7173% ( 4) 00:27:37.651 29.440 - 29.556: 98.7648% ( 4) 00:27:37.651 29.556 - 29.673: 98.8005% ( 3) 00:27:37.651 29.673 - 29.789: 98.8242% ( 2) 00:27:37.651 29.789 - 30.022: 98.8836% ( 5) 00:27:37.651 30.022 - 30.255: 98.9074% ( 2) 00:27:37.651 30.255 - 30.487: 98.9430% ( 3) 00:27:37.651 30.487 - 30.720: 99.0974% ( 13) 00:27:37.651 30.720 - 30.953: 99.1924% ( 8) 00:27:37.651 30.953 - 31.185: 99.2518% ( 5) 00:27:37.651 31.185 - 31.418: 99.2755% ( 2) 00:27:37.651 31.418 - 31.651: 99.3230% ( 4) 00:27:37.651 
31.651 - 31.884: 99.4062% ( 7) 00:27:37.651 31.884 - 32.116: 99.4418% ( 3) 00:27:37.651 32.116 - 32.349: 99.4893% ( 4) 00:27:37.651 32.349 - 32.582: 99.5249% ( 3) 00:27:37.651 32.582 - 32.815: 99.5368% ( 1) 00:27:37.651 32.815 - 33.047: 99.5724% ( 3) 00:27:37.651 33.513 - 33.745: 99.6200% ( 4) 00:27:37.651 33.745 - 33.978: 99.6318% ( 1) 00:27:37.651 34.444 - 34.676: 99.6437% ( 1) 00:27:37.651 34.676 - 34.909: 99.6556% ( 1) 00:27:37.651 34.909 - 35.142: 99.6793% ( 2) 00:27:37.651 35.375 - 35.607: 99.6912% ( 1) 00:27:37.651 36.538 - 36.771: 99.7031% ( 1) 00:27:37.651 36.771 - 37.004: 99.7150% ( 1) 00:27:37.651 37.004 - 37.236: 99.7387% ( 2) 00:27:37.651 37.702 - 37.935: 99.7506% ( 1) 00:27:37.651 38.400 - 38.633: 99.7625% ( 1) 00:27:37.651 39.331 - 39.564: 99.7743% ( 1) 00:27:37.651 40.262 - 40.495: 99.7862% ( 1) 00:27:37.651 40.495 - 40.727: 99.7981% ( 1) 00:27:37.651 42.356 - 42.589: 99.8100% ( 1) 00:27:37.651 42.589 - 42.822: 99.8337% ( 2) 00:27:37.651 43.055 - 43.287: 99.8456% ( 1) 00:27:37.651 43.985 - 44.218: 99.8575% ( 1) 00:27:37.651 44.451 - 44.684: 99.8694% ( 1) 00:27:37.651 45.615 - 45.847: 99.8812% ( 1) 00:27:37.651 47.011 - 47.244: 99.8931% ( 1) 00:27:37.652 49.338 - 49.571: 99.9050% ( 1) 00:27:37.652 50.502 - 50.735: 99.9169% ( 1) 00:27:37.652 50.735 - 50.967: 99.9287% ( 1) 00:27:37.652 51.200 - 51.433: 99.9406% ( 1) 00:27:37.652 55.855 - 56.087: 99.9525% ( 1) 00:27:37.652 61.440 - 61.905: 99.9644% ( 1) 00:27:37.652 82.385 - 82.851: 99.9762% ( 1) 00:27:37.652 105.658 - 106.124: 99.9881% ( 1) 00:27:37.652 108.916 - 109.382: 100.0000% ( 1) 00:27:37.652 00:27:37.652 Complete histogram 00:27:37.652 ================== 00:27:37.652 Range in us Cumulative Count 00:27:37.652 8.844 - 8.902: 0.0119% ( 1) 00:27:37.652 9.076 - 9.135: 0.0356% ( 2) 00:27:37.652 9.193 - 9.251: 0.0713% ( 3) 00:27:37.652 9.251 - 9.309: 0.0831% ( 1) 00:27:37.652 9.309 - 9.367: 0.1306% ( 4) 00:27:37.652 9.367 - 9.425: 0.2257% ( 8) 00:27:37.652 9.425 - 9.484: 0.3325% ( 9) 00:27:37.652 9.484 - 9.542: 0.7363% ( 34) 00:27:37.652 9.542 - 9.600: 3.0523% ( 195) 00:27:37.652 9.600 - 9.658: 9.9050% ( 577) 00:27:37.652 9.658 - 9.716: 20.6651% ( 906) 00:27:37.652 9.716 - 9.775: 32.1021% ( 963) 00:27:37.652 9.775 - 9.833: 41.0095% ( 750) 00:27:37.652 9.833 - 9.891: 47.3159% ( 531) 00:27:37.652 9.891 - 9.949: 53.1116% ( 488) 00:27:37.652 9.949 - 10.007: 57.4941% ( 369) 00:27:37.652 10.007 - 10.065: 60.7601% ( 275) 00:27:37.652 10.065 - 10.124: 62.6366% ( 158) 00:27:37.652 10.124 - 10.182: 63.8242% ( 100) 00:27:37.652 10.182 - 10.240: 64.4893% ( 56) 00:27:37.652 10.240 - 10.298: 65.3207% ( 70) 00:27:37.652 10.298 - 10.356: 66.2589% ( 79) 00:27:37.652 10.356 - 10.415: 67.1615% ( 76) 00:27:37.652 10.415 - 10.473: 68.1591% ( 84) 00:27:37.652 10.473 - 10.531: 69.1330% ( 82) 00:27:37.652 10.531 - 10.589: 70.0356% ( 76) 00:27:37.652 10.589 - 10.647: 70.6413% ( 51) 00:27:37.652 10.647 - 10.705: 71.3183% ( 57) 00:27:37.652 10.705 - 10.764: 71.8171% ( 42) 00:27:37.652 10.764 - 10.822: 72.1259% ( 26) 00:27:37.652 10.822 - 10.880: 72.3990% ( 23) 00:27:37.652 10.880 - 10.938: 72.6722% ( 23) 00:27:37.652 10.938 - 10.996: 72.9097% ( 20) 00:27:37.652 10.996 - 11.055: 73.0285% ( 10) 00:27:37.652 11.055 - 11.113: 73.1235% ( 8) 00:27:37.652 11.113 - 11.171: 73.2779% ( 13) 00:27:37.652 11.171 - 11.229: 73.3848% ( 9) 00:27:37.652 11.229 - 11.287: 73.4798% ( 8) 00:27:37.652 11.287 - 11.345: 73.5629% ( 7) 00:27:37.652 11.345 - 11.404: 73.6698% ( 9) 00:27:37.652 11.404 - 11.462: 73.7648% ( 8) 00:27:37.652 11.462 - 11.520: 73.8361% ( 6) 00:27:37.652 
11.520 - 11.578: 73.9430% ( 9) 00:27:37.652 11.578 - 11.636: 74.0024% ( 5) 00:27:37.652 11.636 - 11.695: 74.0380% ( 3) 00:27:37.652 11.695 - 11.753: 74.0855% ( 4) 00:27:37.652 11.753 - 11.811: 74.2399% ( 13) 00:27:37.652 11.811 - 11.869: 74.9762% ( 62) 00:27:37.652 11.869 - 11.927: 76.6865% ( 144) 00:27:37.652 11.927 - 11.985: 80.0356% ( 282) 00:27:37.652 11.985 - 12.044: 84.0736% ( 340) 00:27:37.652 12.044 - 12.102: 86.8765% ( 236) 00:27:37.652 12.102 - 12.160: 88.3135% ( 121) 00:27:37.652 12.160 - 12.218: 89.3468% ( 87) 00:27:37.652 12.218 - 12.276: 89.8694% ( 44) 00:27:37.652 12.276 - 12.335: 90.1544% ( 24) 00:27:37.652 12.335 - 12.393: 90.2969% ( 12) 00:27:37.652 12.393 - 12.451: 90.4869% ( 16) 00:27:37.652 12.451 - 12.509: 90.6532% ( 14) 00:27:37.652 12.509 - 12.567: 90.9382% ( 24) 00:27:37.652 12.567 - 12.625: 91.2945% ( 30) 00:27:37.652 12.625 - 12.684: 91.6033% ( 26) 00:27:37.652 12.684 - 12.742: 91.7933% ( 16) 00:27:37.652 12.742 - 12.800: 92.0903% ( 25) 00:27:37.652 12.800 - 12.858: 92.3990% ( 26) 00:27:37.652 12.858 - 12.916: 92.7078% ( 26) 00:27:37.652 12.916 - 12.975: 92.9810% ( 23) 00:27:37.652 12.975 - 13.033: 93.2660% ( 24) 00:27:37.652 13.033 - 13.091: 93.5392% ( 23) 00:27:37.652 13.091 - 13.149: 93.7411% ( 17) 00:27:37.652 13.149 - 13.207: 93.9311% ( 16) 00:27:37.652 13.207 - 13.265: 94.1093% ( 15) 00:27:37.652 13.265 - 13.324: 94.2043% ( 8) 00:27:37.652 13.324 - 13.382: 94.2280% ( 2) 00:27:37.652 13.382 - 13.440: 94.2755% ( 4) 00:27:37.652 13.440 - 13.498: 94.3468% ( 6) 00:27:37.652 13.498 - 13.556: 94.4181% ( 6) 00:27:37.652 13.556 - 13.615: 94.4299% ( 1) 00:27:37.652 13.615 - 13.673: 94.4774% ( 4) 00:27:37.652 13.673 - 13.731: 94.5012% ( 2) 00:27:37.652 13.731 - 13.789: 94.5724% ( 6) 00:27:37.652 13.789 - 13.847: 94.6437% ( 6) 00:27:37.652 13.847 - 13.905: 94.7506% ( 9) 00:27:37.652 13.905 - 13.964: 94.7981% ( 4) 00:27:37.652 13.964 - 14.022: 94.8100% ( 1) 00:27:37.652 14.022 - 14.080: 94.8219% ( 1) 00:27:37.652 14.138 - 14.196: 94.8694% ( 4) 00:27:37.652 14.196 - 14.255: 94.9050% ( 3) 00:27:37.652 14.255 - 14.313: 94.9406% ( 3) 00:27:37.652 14.313 - 14.371: 94.9762% ( 3) 00:27:37.652 14.371 - 14.429: 95.0475% ( 6) 00:27:37.652 14.429 - 14.487: 95.0831% ( 3) 00:27:37.652 14.487 - 14.545: 95.1425% ( 5) 00:27:37.652 14.545 - 14.604: 95.1663% ( 2) 00:27:37.652 14.604 - 14.662: 95.2257% ( 5) 00:27:37.652 14.662 - 14.720: 95.2850% ( 5) 00:27:37.652 14.720 - 14.778: 95.3207% ( 3) 00:27:37.652 14.778 - 14.836: 95.4038% ( 7) 00:27:37.652 14.836 - 14.895: 95.4394% ( 3) 00:27:37.652 14.895 - 15.011: 95.5582% ( 10) 00:27:37.652 15.011 - 15.127: 95.7363% ( 15) 00:27:37.652 15.127 - 15.244: 95.8314% ( 8) 00:27:37.652 15.244 - 15.360: 95.9739% ( 12) 00:27:37.652 15.360 - 15.476: 96.0333% ( 5) 00:27:37.652 15.476 - 15.593: 96.0926% ( 5) 00:27:37.652 15.593 - 15.709: 96.1876% ( 8) 00:27:37.652 15.709 - 15.825: 96.2470% ( 5) 00:27:37.652 15.825 - 15.942: 96.3183% ( 6) 00:27:37.652 15.942 - 16.058: 96.4371% ( 10) 00:27:37.652 16.058 - 16.175: 96.5321% ( 8) 00:27:37.652 16.175 - 16.291: 96.6746% ( 12) 00:27:37.652 16.291 - 16.407: 96.7577% ( 7) 00:27:37.652 16.407 - 16.524: 96.7933% ( 3) 00:27:37.652 16.524 - 16.640: 96.8884% ( 8) 00:27:37.652 16.640 - 16.756: 96.9715% ( 7) 00:27:37.652 16.756 - 16.873: 97.1140% ( 12) 00:27:37.652 16.873 - 16.989: 97.1971% ( 7) 00:27:37.652 16.989 - 17.105: 97.2565% ( 5) 00:27:37.652 17.105 - 17.222: 97.3040% ( 4) 00:27:37.652 17.222 - 17.338: 97.4466% ( 12) 00:27:37.652 17.338 - 17.455: 97.5297% ( 7) 00:27:37.652 17.455 - 17.571: 97.5653% ( 3) 
00:27:37.652 17.571 - 17.687: 97.6128% ( 4) 00:27:37.652 17.687 - 17.804: 97.6722% ( 5) 00:27:37.652 17.804 - 17.920: 97.7197% ( 4) 00:27:37.652 17.920 - 18.036: 97.8029% ( 7) 00:27:37.652 18.036 - 18.153: 97.8979% ( 8) 00:27:37.652 18.153 - 18.269: 97.9929% ( 8) 00:27:37.652 18.269 - 18.385: 98.0285% ( 3) 00:27:37.652 18.385 - 18.502: 98.1116% ( 7) 00:27:37.652 18.502 - 18.618: 98.1591% ( 4) 00:27:37.652 18.618 - 18.735: 98.2423% ( 7) 00:27:37.652 18.735 - 18.851: 98.2779% ( 3) 00:27:37.652 18.967 - 19.084: 98.3135% ( 3) 00:27:37.652 19.084 - 19.200: 98.3492% ( 3) 00:27:37.652 19.200 - 19.316: 98.4086% ( 5) 00:27:37.652 19.316 - 19.433: 98.4323% ( 2) 00:27:37.652 19.433 - 19.549: 98.4442% ( 1) 00:27:37.652 19.549 - 19.665: 98.4679% ( 2) 00:27:37.652 19.665 - 19.782: 98.5392% ( 6) 00:27:37.652 19.782 - 19.898: 98.5748% ( 3) 00:27:37.652 20.015 - 20.131: 98.6580% ( 7) 00:27:37.652 20.131 - 20.247: 98.6817% ( 2) 00:27:37.652 20.247 - 20.364: 98.7411% ( 5) 00:27:37.652 20.364 - 20.480: 98.7648% ( 2) 00:27:37.652 20.480 - 20.596: 98.8005% ( 3) 00:27:37.652 20.596 - 20.713: 98.8480% ( 4) 00:27:37.652 20.713 - 20.829: 98.8836% ( 3) 00:27:37.652 20.829 - 20.945: 98.9311% ( 4) 00:27:37.652 20.945 - 21.062: 98.9549% ( 2) 00:27:37.652 21.178 - 21.295: 98.9786% ( 2) 00:27:37.652 21.295 - 21.411: 99.0261% ( 4) 00:27:37.652 21.411 - 21.527: 99.0499% ( 2) 00:27:37.652 21.527 - 21.644: 99.0855% ( 3) 00:27:37.652 21.644 - 21.760: 99.1093% ( 2) 00:27:37.652 21.760 - 21.876: 99.1568% ( 4) 00:27:37.652 21.993 - 22.109: 99.1805% ( 2) 00:27:37.652 22.109 - 22.225: 99.2043% ( 2) 00:27:37.652 22.225 - 22.342: 99.2280% ( 2) 00:27:37.652 22.342 - 22.458: 99.2518% ( 2) 00:27:37.652 22.458 - 22.575: 99.2874% ( 3) 00:27:37.652 22.691 - 22.807: 99.3112% ( 2) 00:27:37.652 22.807 - 22.924: 99.3230% ( 1) 00:27:37.652 22.924 - 23.040: 99.3587% ( 3) 00:27:37.652 23.040 - 23.156: 99.3824% ( 2) 00:27:37.652 23.156 - 23.273: 99.4062% ( 2) 00:27:37.652 23.273 - 23.389: 99.4537% ( 4) 00:27:37.652 23.505 - 23.622: 99.4774% ( 2) 00:27:37.652 23.738 - 23.855: 99.5012% ( 2) 00:27:37.652 23.855 - 23.971: 99.5131% ( 1) 00:27:37.652 24.087 - 24.204: 99.5368% ( 2) 00:27:37.652 24.204 - 24.320: 99.5606% ( 2) 00:27:37.652 24.436 - 24.553: 99.5843% ( 2) 00:27:37.652 24.553 - 24.669: 99.5962% ( 1) 00:27:37.652 24.669 - 24.785: 99.6200% ( 2) 00:27:37.652 24.785 - 24.902: 99.6556% ( 3) 00:27:37.652 25.018 - 25.135: 99.6912% ( 3) 00:27:37.652 25.135 - 25.251: 99.7031% ( 1) 00:27:37.652 25.251 - 25.367: 99.7268% ( 2) 00:27:37.652 25.367 - 25.484: 99.7506% ( 2) 00:27:37.652 25.716 - 25.833: 99.7625% ( 1) 00:27:37.652 25.833 - 25.949: 99.7862% ( 2) 00:27:37.652 25.949 - 26.065: 99.8100% ( 2) 00:27:37.652 26.182 - 26.298: 99.8219% ( 1) 00:27:37.652 26.880 - 26.996: 99.8337% ( 1) 00:27:37.653 27.578 - 27.695: 99.8456% ( 1) 00:27:37.653 27.695 - 27.811: 99.8575% ( 1) 00:27:37.653 27.811 - 27.927: 99.8694% ( 1) 00:27:37.653 31.418 - 31.651: 99.8812% ( 1) 00:27:37.653 31.884 - 32.116: 99.8931% ( 1) 00:27:37.653 32.349 - 32.582: 99.9050% ( 1) 00:27:37.653 33.280 - 33.513: 99.9169% ( 1) 00:27:37.653 33.978 - 34.211: 99.9287% ( 1) 00:27:37.653 34.909 - 35.142: 99.9406% ( 1) 00:27:37.653 38.633 - 38.865: 99.9525% ( 1) 00:27:37.653 70.284 - 70.749: 99.9644% ( 1) 00:27:37.653 82.851 - 83.316: 99.9762% ( 1) 00:27:37.653 84.247 - 84.713: 99.9881% ( 1) 00:27:37.653 88.902 - 89.367: 100.0000% ( 1) 00:27:37.653 00:27:37.653 00:27:37.653 real 0m1.308s 00:27:37.653 user 0m1.107s 00:27:37.653 sys 0m0.151s 00:27:37.653 07:36:16 nvme.nvme_overhead -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.653 07:36:16 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:27:37.653 ************************************ 00:27:37.653 END TEST nvme_overhead 00:27:37.653 ************************************ 00:27:37.653 07:36:16 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:37.653 07:36:16 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:37.653 07:36:16 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:27:37.653 07:36:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.653 07:36:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:37.653 ************************************ 00:27:37.653 START TEST nvme_arbitration 00:27:37.653 ************************************ 00:27:37.653 07:36:16 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:27:41.836 Initializing NVMe Controllers 00:27:41.836 Attached to 0000:00:10.0 00:27:41.836 Attached to 0000:00:11.0 00:27:41.836 Attached to 0000:00:13.0 00:27:41.836 Attached to 0000:00:12.0 00:27:41.836 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:27:41.836 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:27:41.836 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:27:41.836 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:27:41.836 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:27:41.836 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:27:41.836 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:27:41.836 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:27:41.836 Initialization complete. Launching workers. 
00:27:41.836 Starting thread on core 1 with urgent priority queue 00:27:41.836 Starting thread on core 2 with urgent priority queue 00:27:41.836 Starting thread on core 3 with urgent priority queue 00:27:41.836 Starting thread on core 0 with urgent priority queue 00:27:41.836 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:27:41.836 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:27:41.836 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:27:41.836 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios 00:27:41.836 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios 00:27:41.836 QEMU NVMe Ctrl (12342 ) core 3: 704.00 IO/s 142.05 secs/100000 ios 00:27:41.836 ======================================================== 00:27:41.836 00:27:41.836 00:27:41.836 real 0m3.426s 00:27:41.836 user 0m9.431s 00:27:41.836 sys 0m0.156s 00:27:41.836 ************************************ 00:27:41.836 END TEST nvme_arbitration 00:27:41.836 ************************************ 00:27:41.836 07:36:19 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.836 07:36:19 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:27:41.836 07:36:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:41.836 07:36:19 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:27:41.836 07:36:19 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:41.836 07:36:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.836 07:36:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:41.836 ************************************ 00:27:41.836 START TEST nvme_single_aen 00:27:41.836 ************************************ 00:27:41.836 07:36:19 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:27:41.836 Asynchronous Event Request test 00:27:41.836 Attached to 0000:00:10.0 00:27:41.836 Attached to 0000:00:11.0 00:27:41.836 Attached to 0000:00:13.0 00:27:41.837 Attached to 0000:00:12.0 00:27:41.837 Reset controller to setup AER completions for this process 00:27:41.837 Registering asynchronous event callbacks... 
00:27:41.837 Getting orig temperature thresholds of all controllers 00:27:41.837 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:41.837 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:41.837 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:41.837 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:27:41.837 Setting all controllers temperature threshold low to trigger AER 00:27:41.837 Waiting for all controllers temperature threshold to be set lower 00:27:41.837 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:41.837 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:27:41.837 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:41.837 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:27:41.837 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:41.837 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:27:41.837 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:27:41.837 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:27:41.837 Waiting for all controllers to trigger AER and reset threshold 00:27:41.837 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:41.837 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:41.837 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:41.837 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:27:41.837 Cleaning up... 00:27:41.837 00:27:41.837 real 0m0.268s 00:27:41.837 user 0m0.094s 00:27:41.837 sys 0m0.134s 00:27:41.837 ************************************ 00:27:41.837 END TEST nvme_single_aen 00:27:41.837 ************************************ 00:27:41.837 07:36:19 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.837 07:36:19 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:27:41.837 07:36:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:27:41.837 07:36:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:27:41.837 07:36:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:41.837 07:36:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.837 07:36:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:41.837 ************************************ 00:27:41.837 START TEST nvme_doorbell_aers 00:27:41.837 ************************************ 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:41.837 07:36:19 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:41.837 07:36:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:27:41.837 07:36:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:41.837 07:36:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:27:41.837 07:36:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:27:41.837 [2024-07-15 07:36:20.306984] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:27:51.803 Executing: test_write_invalid_db 00:27:51.803 Waiting for AER completion... 00:27:51.803 Failure: test_write_invalid_db 00:27:51.803 00:27:51.803 Executing: test_invalid_db_write_overflow_sq 00:27:51.803 Waiting for AER completion... 00:27:51.803 Failure: test_invalid_db_write_overflow_sq 00:27:51.803 00:27:51.803 Executing: test_invalid_db_write_overflow_cq 00:27:51.803 Waiting for AER completion... 00:27:51.803 Failure: test_invalid_db_write_overflow_cq 00:27:51.803 00:27:51.803 07:36:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:27:51.803 07:36:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:27:51.803 [2024-07-15 07:36:30.360602] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:01.784 Executing: test_write_invalid_db 00:28:01.784 Waiting for AER completion... 00:28:01.784 Failure: test_write_invalid_db 00:28:01.784 00:28:01.784 Executing: test_invalid_db_write_overflow_sq 00:28:01.784 Waiting for AER completion... 00:28:01.784 Failure: test_invalid_db_write_overflow_sq 00:28:01.784 00:28:01.784 Executing: test_invalid_db_write_overflow_cq 00:28:01.784 Waiting for AER completion... 00:28:01.784 Failure: test_invalid_db_write_overflow_cq 00:28:01.784 00:28:01.784 07:36:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:01.784 07:36:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:28:02.042 [2024-07-15 07:36:40.402554] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:12.086 Executing: test_write_invalid_db 00:28:12.086 Waiting for AER completion... 00:28:12.086 Failure: test_write_invalid_db 00:28:12.086 00:28:12.086 Executing: test_invalid_db_write_overflow_sq 00:28:12.086 Waiting for AER completion... 00:28:12.086 Failure: test_invalid_db_write_overflow_sq 00:28:12.086 00:28:12.086 Executing: test_invalid_db_write_overflow_cq 00:28:12.086 Waiting for AER completion... 
00:28:12.086 Failure: test_invalid_db_write_overflow_cq 00:28:12.086 00:28:12.086 07:36:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:12.086 07:36:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:28:12.086 [2024-07-15 07:36:50.508769] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 Executing: test_write_invalid_db 00:28:22.111 Waiting for AER completion... 00:28:22.111 Failure: test_write_invalid_db 00:28:22.111 00:28:22.111 Executing: test_invalid_db_write_overflow_sq 00:28:22.111 Waiting for AER completion... 00:28:22.111 Failure: test_invalid_db_write_overflow_sq 00:28:22.111 00:28:22.111 Executing: test_invalid_db_write_overflow_cq 00:28:22.111 Waiting for AER completion... 00:28:22.111 Failure: test_invalid_db_write_overflow_cq 00:28:22.111 00:28:22.111 ************************************ 00:28:22.111 END TEST nvme_doorbell_aers 00:28:22.111 ************************************ 00:28:22.111 00:28:22.111 real 0m40.275s 00:28:22.111 user 0m34.341s 00:28:22.111 sys 0m5.512s 00:28:22.111 07:37:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.111 07:37:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:28:22.111 07:37:00 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:22.111 07:37:00 nvme -- nvme/nvme.sh@97 -- # uname 00:28:22.111 07:37:00 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:28:22.111 07:37:00 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:28:22.111 07:37:00 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:28:22.111 07:37:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.111 07:37:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:22.111 ************************************ 00:28:22.111 START TEST nvme_multi_aen 00:28:22.111 ************************************ 00:28:22.111 07:37:00 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:28:22.111 [2024-07-15 07:37:00.530009] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.530415] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.530755] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.532745] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.532955] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.533145] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 
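The nvme_doorbell_aers pass that ends above collects controller addresses with get_nvme_bdfs and then drives test/nvme/doorbell_aers/doorbell_aers once per PCIe address under a 10-second timeout. A minimal standalone sketch of that loop, assuming the same SPDK checkout under /home/vagrant/spdk_repo/spdk that this job uses, would be:

    #!/usr/bin/env bash
    # Sketch only: mirrors the xtrace above; rootdir is assumed to be an SPDK checkout.
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits JSON whose .config[].params.traddr fields are the NVMe PCIe addresses (BDFs).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # One bounded doorbell/AER pass per controller, matching the four rounds logged above.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done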
00:28:22.111 [2024-07-15 07:37:00.535005] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.535208] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.535439] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.537050] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.537243] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 [2024-07-15 07:37:00.537405] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70211) is not found. Dropping the request. 00:28:22.111 Child process pid: 70728 00:28:22.369 [Child] Asynchronous Event Request test 00:28:22.369 [Child] Attached to 0000:00:10.0 00:28:22.369 [Child] Attached to 0000:00:11.0 00:28:22.369 [Child] Attached to 0000:00:13.0 00:28:22.369 [Child] Attached to 0000:00:12.0 00:28:22.369 [Child] Registering asynchronous event callbacks... 00:28:22.369 [Child] Getting orig temperature thresholds of all controllers 00:28:22.369 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 [Child] Waiting for all controllers to trigger AER and reset threshold 00:28:22.369 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 [Child] Cleaning up... 00:28:22.369 Asynchronous Event Request test 00:28:22.369 Attached to 0000:00:10.0 00:28:22.369 Attached to 0000:00:11.0 00:28:22.369 Attached to 0000:00:13.0 00:28:22.369 Attached to 0000:00:12.0 00:28:22.369 Reset controller to setup AER completions for this process 00:28:22.369 Registering asynchronous event callbacks... 
00:28:22.369 Getting orig temperature thresholds of all controllers 00:28:22.369 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:22.369 Setting all controllers temperature threshold low to trigger AER 00:28:22.369 Waiting for all controllers temperature threshold to be set lower 00:28:22.369 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:28:22.369 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:28:22.369 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:28:22.369 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:22.369 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:28:22.369 Waiting for all controllers to trigger AER and reset threshold 00:28:22.369 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:22.369 Cleaning up... 00:28:22.369 00:28:22.369 real 0m0.573s 00:28:22.369 user 0m0.197s 00:28:22.369 sys 0m0.266s 00:28:22.369 07:37:00 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.369 07:37:00 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:28:22.369 ************************************ 00:28:22.369 END TEST nvme_multi_aen 00:28:22.369 ************************************ 00:28:22.369 07:37:00 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:22.369 07:37:00 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:22.369 07:37:00 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:22.369 07:37:00 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.369 07:37:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:22.369 ************************************ 00:28:22.369 START TEST nvme_startup 00:28:22.369 ************************************ 00:28:22.369 07:37:00 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:22.627 Initializing NVMe Controllers 00:28:22.627 Attached to 0000:00:10.0 00:28:22.627 Attached to 0000:00:11.0 00:28:22.627 Attached to 0000:00:13.0 00:28:22.627 Attached to 0000:00:12.0 00:28:22.627 Initialization complete. 00:28:22.627 Time used:206568.703 (us). 
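The two AER passes above reuse one binary: test/nvme/aer/aer -T -i 0 for nvme_single_aen and aer -m -T -i 0 for nvme_multi_aen (which spawns child pid 70728). Judging from the output, -T selects the temperature-threshold path (thresholds are lowered to force an AER and then reset) and -m the parent/child variant; both readings are inferences from this log rather than documented behavior. A hedged re-run sketch:

    # Sketch only: re-runs the two AER variants seen above; the flag meanings are assumptions.
    aer=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    "$aer" -T -i 0       # single process: one temperature-threshold AER per controller
    "$aer" -m -T -i 0    # parent plus child process repeating the registration/threshold cycle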
00:28:22.627 00:28:22.627 real 0m0.315s 00:28:22.627 user 0m0.108s 00:28:22.627 sys 0m0.158s 00:28:22.627 07:37:01 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.627 07:37:01 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:28:22.627 ************************************ 00:28:22.627 END TEST nvme_startup 00:28:22.627 ************************************ 00:28:22.885 07:37:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:22.885 07:37:01 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:28:22.885 07:37:01 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:22.885 07:37:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.885 07:37:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:22.885 ************************************ 00:28:22.885 START TEST nvme_multi_secondary 00:28:22.885 ************************************ 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70783 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70784 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:28:22.885 07:37:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:26.171 Initializing NVMe Controllers 00:28:26.171 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:26.171 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:26.171 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:26.171 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:26.171 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:28:26.171 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:28:26.171 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:28:26.171 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:28:26.171 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:28:26.171 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:28:26.171 Initialization complete. Launching workers. 
00:28:26.171 ======================================================== 00:28:26.171 Latency(us) 00:28:26.171 Device Information : IOPS MiB/s Average min max 00:28:26.171 PCIE (0000:00:10.0) NSID 1 from core 2: 2294.16 8.96 6972.29 1166.16 18987.36 00:28:26.171 PCIE (0000:00:11.0) NSID 1 from core 2: 2294.16 8.96 6982.88 1140.82 18957.17 00:28:26.171 PCIE (0000:00:13.0) NSID 1 from core 2: 2288.84 8.94 6998.99 1120.81 20098.49 00:28:26.171 PCIE (0000:00:12.0) NSID 1 from core 2: 2294.16 8.96 6982.91 1156.37 17294.87 00:28:26.171 PCIE (0000:00:12.0) NSID 2 from core 2: 2294.16 8.96 6983.10 1197.91 19296.37 00:28:26.171 PCIE (0000:00:12.0) NSID 3 from core 2: 2294.16 8.96 6983.19 1183.66 15793.94 00:28:26.171 ======================================================== 00:28:26.171 Total : 13759.65 53.75 6983.89 1120.81 20098.49 00:28:26.171 00:28:26.171 07:37:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70783 00:28:26.430 Initializing NVMe Controllers 00:28:26.430 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:26.430 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:26.430 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:26.430 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:26.430 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:28:26.430 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:28:26.430 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:28:26.430 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:28:26.430 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:28:26.430 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:28:26.430 Initialization complete. Launching workers. 00:28:26.430 ======================================================== 00:28:26.430 Latency(us) 00:28:26.430 Device Information : IOPS MiB/s Average min max 00:28:26.430 PCIE (0000:00:10.0) NSID 1 from core 1: 5082.61 19.85 3146.07 1234.18 7129.69 00:28:26.430 PCIE (0000:00:11.0) NSID 1 from core 1: 5082.61 19.85 3147.62 1278.44 7107.19 00:28:26.431 PCIE (0000:00:13.0) NSID 1 from core 1: 5082.61 19.85 3147.65 1262.35 7568.52 00:28:26.431 PCIE (0000:00:12.0) NSID 1 from core 1: 5087.94 19.87 3144.22 1271.76 8217.27 00:28:26.431 PCIE (0000:00:12.0) NSID 2 from core 1: 5082.61 19.85 3147.48 1255.88 8037.86 00:28:26.431 PCIE (0000:00:12.0) NSID 3 from core 1: 5082.61 19.85 3147.42 1277.76 7704.46 00:28:26.431 ======================================================== 00:28:26.431 Total : 30500.99 119.14 3146.74 1234.18 8217.27 00:28:26.431 00:28:28.329 Initializing NVMe Controllers 00:28:28.329 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:28.329 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:28.329 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:28.329 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:28.329 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:28:28.329 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:28:28.329 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:28:28.329 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:28:28.329 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:28:28.329 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:28:28.329 Initialization complete. Launching workers. 
00:28:28.329 ======================================================== 00:28:28.329 Latency(us) 00:28:28.329 Device Information : IOPS MiB/s Average min max 00:28:28.329 PCIE (0000:00:10.0) NSID 1 from core 0: 7639.86 29.84 2092.51 948.34 10965.21 00:28:28.329 PCIE (0000:00:11.0) NSID 1 from core 0: 7639.86 29.84 2093.75 979.39 11164.51 00:28:28.329 PCIE (0000:00:13.0) NSID 1 from core 0: 7639.86 29.84 2093.68 977.89 11608.41 00:28:28.329 PCIE (0000:00:12.0) NSID 1 from core 0: 7639.86 29.84 2093.59 976.68 9151.96 00:28:28.329 PCIE (0000:00:12.0) NSID 2 from core 0: 7639.86 29.84 2093.52 992.24 9476.13 00:28:28.329 PCIE (0000:00:12.0) NSID 3 from core 0: 7639.86 29.84 2093.45 910.83 9838.59 00:28:28.329 ======================================================== 00:28:28.329 Total : 45839.18 179.06 2093.42 910.83 11608.41 00:28:28.329 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70784 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70853 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70854 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:28:28.329 07:37:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:31.609 Initializing NVMe Controllers 00:28:31.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:31.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:28:31.609 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:28:31.609 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:28:31.609 Initialization complete. Launching workers. 
00:28:31.609 ======================================================== 00:28:31.609 Latency(us) 00:28:31.609 Device Information : IOPS MiB/s Average min max 00:28:31.609 PCIE (0000:00:10.0) NSID 1 from core 0: 5525.01 21.58 2893.96 1116.61 10761.51 00:28:31.609 PCIE (0000:00:11.0) NSID 1 from core 0: 5525.01 21.58 2895.35 1116.40 10829.43 00:28:31.609 PCIE (0000:00:13.0) NSID 1 from core 0: 5525.01 21.58 2895.50 1124.28 11242.22 00:28:31.609 PCIE (0000:00:12.0) NSID 1 from core 0: 5530.34 21.60 2892.65 1134.87 9300.94 00:28:31.609 PCIE (0000:00:12.0) NSID 2 from core 0: 5530.34 21.60 2892.59 1133.77 10263.38 00:28:31.609 PCIE (0000:00:12.0) NSID 3 from core 0: 5530.34 21.60 2892.72 1141.19 10323.67 00:28:31.609 ======================================================== 00:28:31.609 Total : 33166.05 129.55 2893.79 1116.40 11242.22 00:28:31.609 00:28:31.609 Initializing NVMe Controllers 00:28:31.609 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:31.609 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:31.609 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:28:31.609 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:28:31.609 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:28:31.609 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:28:31.609 Initialization complete. Launching workers. 00:28:31.609 ======================================================== 00:28:31.609 Latency(us) 00:28:31.609 Device Information : IOPS MiB/s Average min max 00:28:31.609 PCIE (0000:00:10.0) NSID 1 from core 1: 5249.92 20.51 3045.67 1035.28 10148.06 00:28:31.609 PCIE (0000:00:11.0) NSID 1 from core 1: 5249.92 20.51 3047.32 1076.63 9392.23 00:28:31.609 PCIE (0000:00:13.0) NSID 1 from core 1: 5249.92 20.51 3047.27 1050.93 10227.78 00:28:31.609 PCIE (0000:00:12.0) NSID 1 from core 1: 5249.92 20.51 3047.11 1067.47 11017.65 00:28:31.609 PCIE (0000:00:12.0) NSID 2 from core 1: 5249.92 20.51 3046.95 1048.54 11287.66 00:28:31.609 PCIE (0000:00:12.0) NSID 3 from core 1: 5255.25 20.53 3043.75 1079.70 8134.47 00:28:31.609 ======================================================== 00:28:31.609 Total : 31504.84 123.07 3046.34 1035.28 11287.66 00:28:31.609 00:28:33.508 Initializing NVMe Controllers 00:28:33.508 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:28:33.508 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:28:33.508 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:28:33.508 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:28:33.508 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:28:33.508 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:28:33.508 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:28:33.508 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:28:33.508 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:28:33.508 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:28:33.508 Initialization complete. Launching workers. 
00:28:33.508 ======================================================== 00:28:33.508 Latency(us) 00:28:33.508 Device Information : IOPS MiB/s Average min max 00:28:33.508 PCIE (0000:00:10.0) NSID 1 from core 2: 3439.92 13.44 4647.62 1103.00 18262.16 00:28:33.508 PCIE (0000:00:11.0) NSID 1 from core 2: 3443.12 13.45 4642.64 1061.26 13673.21 00:28:33.508 PCIE (0000:00:13.0) NSID 1 from core 2: 3439.92 13.44 4646.60 1037.61 16796.60 00:28:33.508 PCIE (0000:00:12.0) NSID 1 from core 2: 3439.92 13.44 4646.00 950.32 16220.71 00:28:33.508 PCIE (0000:00:12.0) NSID 2 from core 2: 3439.92 13.44 4646.33 871.15 14035.19 00:28:33.508 PCIE (0000:00:12.0) NSID 3 from core 2: 3439.92 13.44 4646.43 821.88 14343.52 00:28:33.508 ======================================================== 00:28:33.508 Total : 20642.74 80.64 4645.94 821.88 18262.16 00:28:33.508 00:28:33.765 ************************************ 00:28:33.765 END TEST nvme_multi_secondary 00:28:33.765 ************************************ 00:28:33.765 07:37:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70853 00:28:33.765 07:37:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70854 00:28:33.765 00:28:33.765 real 0m10.892s 00:28:33.765 user 0m18.587s 00:28:33.765 sys 0m0.992s 00:28:33.765 07:37:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.765 07:37:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:28:33.765 07:37:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:33.765 07:37:12 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:28:33.765 07:37:12 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:28:33.765 07:37:12 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69785 ]] 00:28:33.765 07:37:12 nvme -- common/autotest_common.sh@1088 -- # kill 69785 00:28:33.765 07:37:12 nvme -- common/autotest_common.sh@1089 -- # wait 69785 00:28:33.765 [2024-07-15 07:37:12.220872] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.221171] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.221209] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.221260] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.224411] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.224498] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.224528] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.224575] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 
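nvme_multi_secondary, which finishes above, exercises SPDK's multi-process mode by pointing three spdk_nvme_perf instances at the same controllers through a shared instance id (-i 0): a 5-second reader on core mask 0x1 and two 3-second readers on 0x2 and 0x4, with the backgrounded pids (70783/70784, then 70853/70854) reaped via wait. A rough sketch of that arrangement, with the foreground/background split inferred from the trace:

    # Sketch only: the primary/secondary spdk_nvme_perf pattern as it appears in this log.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longest run, core 0
    pid0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2
    pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2     # core 1, run in the foreground
    wait "$pid0"
    wait "$pid1"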
00:28:33.765 [2024-07-15 07:37:12.227579] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.227663] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.227693] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.227718] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.230438] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.230521] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.230542] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:33.765 [2024-07-15 07:37:12.230560] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70726) is not found. Dropping the request. 00:28:34.022 07:37:12 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:28:34.022 07:37:12 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:28:34.022 07:37:12 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:28:34.022 07:37:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:34.022 07:37:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.022 07:37:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:34.022 ************************************ 00:28:34.022 START TEST bdev_nvme_reset_stuck_adm_cmd 00:28:34.022 ************************************ 00:28:34.022 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:28:34.281 * Looking for test storage... 
00:28:34.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:28:34.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71012 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71012 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71012 ']' 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.281 07:37:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:34.281 [2024-07-15 07:37:12.891547] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:28:34.281 [2024-07-15 07:37:12.892358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71012 ] 00:28:34.539 [2024-07-15 07:37:13.123966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.105 [2024-07-15 07:37:13.428801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.105 [2024-07-15 07:37:13.428872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.105 [2024-07-15 07:37:13.429001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.105 [2024-07-15 07:37:13.429016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 nvme0n1 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_TKx8h.txt 00:28:36.146 07:37:14 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 true 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721029034 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71036 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:36.146 07:37:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:38.046 [2024-07-15 07:37:16.442830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:28:38.046 [2024-07-15 07:37:16.443230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:38.046 [2024-07-15 07:37:16.443274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:28:38.046 [2024-07-15 07:37:16.443300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.046 [2024-07-15 07:37:16.445578] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
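Condensed, the RPC sequence the script has just driven is: attach the controller, arm a one-shot error injection that holds back the next admin Get Features (opc 10) and pre-sets its failure status (sct 0 / sc 1), submit exactly such a command in the background so it gets stuck, then reset the controller, which completes the held request manually (the INVALID OPCODE completion in the trace). A sketch built from the same spdk_tgt and rpc.py calls that appear above; the surrounding shell is illustrative only, and $cmd_b64 stands for the base64 command payload shown in the trace:

    # start the target and attach the controller under test
    ./build/bin/spdk_tgt -m 0xF &
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # hold back the next admin Get Features and pre-set its error status
    scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # submit the admin command that will get stuck
    scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    get_feat_pid=$!
    sleep 2   # give the command time to reach the controller

    # resetting the controller forces the held request to complete
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"
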
00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71036 00:28:38.046 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71036 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71036 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_TKx8h.txt 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_TKx8h.txt 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71012 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71012 ']' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71012 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71012 00:28:38.046 killing process with pid 71012 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71012' 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71012 00:28:38.046 07:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71012 00:28:40.583 07:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:28:40.583 07:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:28:40.583 00:28:40.583 real 0m6.555s 00:28:40.583 user 0m22.042s 00:28:40.583 sys 0m0.817s 00:28:40.583 ************************************ 00:28:40.583 END TEST bdev_nvme_reset_stuck_adm_cmd 00:28:40.583 ************************************ 00:28:40.583 07:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:40.583 07:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:28:40.583 07:37:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:40.583 07:37:19 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:28:40.583 07:37:19 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:28:40.583 07:37:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:40.583 07:37:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:40.583 07:37:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:40.841 ************************************ 00:28:40.841 START TEST nvme_fio 00:28:40.841 ************************************ 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:28:40.841 
07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:28:40.841 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:28:40.841 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:28:41.100 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:28:41.100 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:28:41.360 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:28:41.360 07:37:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
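Before launching fio, fio_nvme/fio_plugin check whether the SPDK ioengine was built with ASAN: they ldd the plugin, look for libasan (or libclang_rt.asan), and if a sanitizer runtime is found they preload it ahead of the plugin so it initialises first, which is what the LD_PRELOAD line above ends up doing. A rough sketch of that logic, inferred from the xtrace rather than copied from autotest_common.sh:

    fio_plugin() {
        local plugin=$1; shift
        local sanitizer asan_lib=
        # find the ASAN runtime the plugin links against, if any
        for sanitizer in libasan libclang_rt.asan; do
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n "$asan_lib" ]] && break
        done
        # preload the sanitizer first, then the SPDK ioengine, and run fio
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }
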
00:28:41.360 07:37:19 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:28:41.624 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:41.624 fio-3.35 00:28:41.624 Starting 1 thread 00:28:44.903 00:28:44.903 test: (groupid=0, jobs=1): err= 0: pid=71187: Mon Jul 15 07:37:23 2024 00:28:44.903 read: IOPS=16.4k, BW=64.0MiB/s (67.2MB/s)(128MiB/2001msec) 00:28:44.903 slat (usec): min=5, max=189, avg= 6.88, stdev= 2.56 00:28:44.903 clat (usec): min=254, max=8548, avg=3882.40, stdev=514.60 00:28:44.903 lat (usec): min=260, max=8559, avg=3889.29, stdev=515.34 00:28:44.903 clat percentiles (usec): 00:28:44.903 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3621], 00:28:44.903 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3851], 00:28:44.903 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4424], 00:28:44.903 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7832], 99.95th=[ 7898], 00:28:44.903 | 99.99th=[ 8356] 00:28:44.903 bw ( KiB/s): min=58344, max=67520, per=98.04%, avg=64296.00, stdev=5160.55, samples=3 00:28:44.903 iops : min=14586, max=16880, avg=16074.00, stdev=1290.14, samples=3 00:28:44.903 write: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(128MiB/2001msec); 0 zone resets 00:28:44.903 slat (usec): min=5, max=309, avg= 7.04, stdev= 2.83 00:28:44.903 clat (usec): min=297, max=8537, avg=3886.83, stdev=508.05 00:28:44.903 lat (usec): min=303, max=8543, avg=3893.87, stdev=508.81 00:28:44.903 clat percentiles (usec): 00:28:44.903 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:28:44.903 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3851], 00:28:44.903 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4424], 00:28:44.903 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 7570], 99.95th=[ 7832], 00:28:44.903 | 99.99th=[ 8225] 00:28:44.903 bw ( KiB/s): min=58664, max=67072, per=97.44%, avg=64018.67, stdev=4652.50, samples=3 00:28:44.903 iops : min=14666, max=16768, avg=16004.67, stdev=1163.12, samples=3 00:28:44.903 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:44.903 lat (msec) : 2=0.05%, 4=83.51%, 10=16.40% 00:28:44.904 cpu : usr=98.25%, sys=0.45%, ctx=32, majf=0, minf=605 00:28:44.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:44.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:44.904 issued rwts: total=32807,32866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:44.904 00:28:44.904 Run status group 0 (all jobs): 00:28:44.904 READ: bw=64.0MiB/s (67.2MB/s), 64.0MiB/s-64.0MiB/s (67.2MB/s-67.2MB/s), io=128MiB (134MB), run=2001-2001msec 00:28:44.904 WRITE: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=128MiB (135MB), run=2001-2001msec 00:28:44.904 ----------------------------------------------------- 00:28:44.904 Suppressions used: 00:28:44.904 count bytes template 00:28:44.904 1 32 /usr/src/fio/parse.c 00:28:44.904 1 8 libtcmalloc_minimal.so 00:28:44.904 ----------------------------------------------------- 00:28:44.904 00:28:44.904 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:44.904 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 
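nvme_fio_test then repeats the same recipe for every controller found (0000:00:10.0 through 0000:00:13.0 here): identify the controller to confirm it exposes namespaces, pick the block size (plain 4096 unless the namespace advertises extended data LBAs), and run the fio job against that PCIe address, with ':' in the traddr rewritten to '.' for fio's filename syntax. Sketched from the nvme.sh line numbers in the trace; the exact flow in nvme.sh may differ in detail:

    for bdf in "${bdfs[@]}"; do
        # skip controllers that expose no namespaces
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
            | grep -qE '^Namespace ID:[0-9]+' || continue

        # plain 4 KiB blocks; extended-data-LBA namespaces would need a larger bs
        bs=4096

        # fio addresses the controller by traddr with ':' replaced by '.'
        fio_nvme "$rootdir/app/fio/nvme/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
    done
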
00:28:44.904 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:28:44.904 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:28:45.161 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:28:45.161 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:28:45.419 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:28:45.419 07:37:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:45.419 07:37:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:28:45.677 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:45.677 fio-3.35 00:28:45.677 Starting 1 thread 00:28:48.962 00:28:48.962 test: (groupid=0, jobs=1): err= 0: pid=71253: Mon Jul 15 07:37:27 2024 00:28:48.962 read: IOPS=16.1k, BW=63.0MiB/s (66.0MB/s)(126MiB/2001msec) 00:28:48.962 slat (usec): min=5, max=194, avg= 7.08, stdev= 2.42 00:28:48.962 clat (usec): min=245, max=8251, avg=3946.03, stdev=742.33 00:28:48.962 lat (usec): min=251, max=8335, avg=3953.11, stdev=743.65 00:28:48.962 clat percentiles (usec): 00:28:48.962 | 1.00th=[ 3326], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:28:48.962 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:28:48.962 | 70.00th=[ 3851], 80.00th=[ 4146], 90.00th=[ 4686], 95.00th=[ 5997], 00:28:48.962 | 99.00th=[ 
6652], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7635], 00:28:48.963 | 99.99th=[ 8029] 00:28:48.963 bw ( KiB/s): min=59456, max=70304, per=99.08%, avg=63884.00, stdev=5691.73, samples=3 00:28:48.963 iops : min=14864, max=17576, avg=15971.00, stdev=1422.93, samples=3 00:28:48.963 write: IOPS=16.1k, BW=63.1MiB/s (66.1MB/s)(126MiB/2001msec); 0 zone resets 00:28:48.963 slat (usec): min=5, max=101, avg= 7.26, stdev= 2.25 00:28:48.963 clat (usec): min=321, max=8085, avg=3953.96, stdev=742.00 00:28:48.963 lat (usec): min=328, max=8110, avg=3961.22, stdev=743.36 00:28:48.963 clat percentiles (usec): 00:28:48.963 | 1.00th=[ 3359], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:28:48.963 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:28:48.963 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 4686], 95.00th=[ 6063], 00:28:48.963 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7570], 00:28:48.963 | 99.99th=[ 7898] 00:28:48.963 bw ( KiB/s): min=59712, max=69584, per=98.48%, avg=63615.33, stdev=5250.08, samples=3 00:28:48.963 iops : min=14928, max=17396, avg=15903.67, stdev=1312.62, samples=3 00:28:48.963 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:28:48.963 lat (msec) : 2=0.06%, 4=75.92%, 10=23.99% 00:28:48.963 cpu : usr=99.05%, sys=0.00%, ctx=15, majf=0, minf=606 00:28:48.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:48.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:48.963 issued rwts: total=32255,32316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:48.963 00:28:48.963 Run status group 0 (all jobs): 00:28:48.963 READ: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=126MiB (132MB), run=2001-2001msec 00:28:48.963 WRITE: bw=63.1MiB/s (66.1MB/s), 63.1MiB/s-63.1MiB/s (66.1MB/s-66.1MB/s), io=126MiB (132MB), run=2001-2001msec 00:28:48.963 ----------------------------------------------------- 00:28:48.963 Suppressions used: 00:28:48.963 count bytes template 00:28:48.963 1 32 /usr/src/fio/parse.c 00:28:48.963 1 8 libtcmalloc_minimal.so 00:28:48.963 ----------------------------------------------------- 00:28:48.963 00:28:48.963 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:48.963 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:28:48.963 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:28:48.963 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:28:49.221 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:28:49.221 07:37:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:28:49.787 07:37:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:28:49.787 07:37:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:28:49.787 07:37:28 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:49.787 07:37:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:28:49.787 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:49.787 fio-3.35 00:28:49.787 Starting 1 thread 00:28:53.966 00:28:53.966 test: (groupid=0, jobs=1): err= 0: pid=71314: Mon Jul 15 07:37:31 2024 00:28:53.966 read: IOPS=16.9k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec) 00:28:53.966 slat (nsec): min=4750, max=49987, avg=6438.95, stdev=1726.86 00:28:53.966 clat (usec): min=241, max=7425, avg=3773.37, stdev=429.76 00:28:53.966 lat (usec): min=246, max=7431, avg=3779.81, stdev=430.35 00:28:53.966 clat percentiles (usec): 00:28:53.966 | 1.00th=[ 3228], 5.00th=[ 3425], 10.00th=[ 3458], 20.00th=[ 3523], 00:28:53.966 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:28:53.966 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4293], 95.00th=[ 4883], 00:28:53.966 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 6652], 99.95th=[ 6980], 00:28:53.966 | 99.99th=[ 7308] 00:28:53.966 bw ( KiB/s): min=66272, max=70184, per=100.00%, avg=68160.00, stdev=1959.54, samples=3 00:28:53.966 iops : min=16568, max=17546, avg=17040.00, stdev=489.89, samples=3 00:28:53.966 write: IOPS=16.9k, BW=66.0MiB/s (69.2MB/s)(132MiB/2001msec); 0 zone resets 00:28:53.966 slat (nsec): min=4831, max=43904, avg=6637.13, stdev=1701.32 00:28:53.966 clat (usec): min=303, max=7536, avg=3782.46, stdev=437.81 00:28:53.966 lat (usec): min=309, max=7542, avg=3789.10, stdev=438.35 00:28:53.966 clat percentiles (usec): 00:28:53.966 | 1.00th=[ 3228], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3523], 00:28:53.966 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:28:53.966 | 70.00th=[ 3752], 80.00th=[ 3884], 90.00th=[ 4293], 95.00th=[ 4883], 00:28:53.966 | 99.00th=[ 5211], 99.50th=[ 5538], 99.90th=[ 6783], 99.95th=[ 7242], 00:28:53.966 | 99.99th=[ 7373] 00:28:53.966 bw ( KiB/s): min=66528, max=69720, 
per=100.00%, avg=67976.00, stdev=1616.46, samples=3 00:28:53.966 iops : min=16632, max=17430, avg=16994.00, stdev=404.11, samples=3 00:28:53.966 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:28:53.966 lat (msec) : 2=0.05%, 4=83.26%, 10=16.65% 00:28:53.966 cpu : usr=99.10%, sys=0.00%, ctx=4, majf=0, minf=605 00:28:53.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:53.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:53.966 issued rwts: total=33721,33802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:53.966 00:28:53.966 Run status group 0 (all jobs): 00:28:53.966 READ: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:28:53.966 WRITE: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=132MiB (138MB), run=2001-2001msec 00:28:53.966 ----------------------------------------------------- 00:28:53.966 Suppressions used: 00:28:53.966 count bytes template 00:28:53.966 1 32 /usr/src/fio/parse.c 00:28:53.966 1 8 libtcmalloc_minimal.so 00:28:53.966 ----------------------------------------------------- 00:28:53.966 00:28:53.966 07:37:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:53.966 07:37:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:28:53.966 07:37:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:28:53.966 07:37:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:28:53.966 07:37:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:28:53.966 07:37:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:28:53.966 07:37:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:28:53.966 07:37:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:53.966 07:37:32 nvme.nvme_fio -- 
common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:28:53.966 07:37:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:28:54.246 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:54.246 fio-3.35 00:28:54.246 Starting 1 thread 00:28:58.428 00:28:58.428 test: (groupid=0, jobs=1): err= 0: pid=71375: Mon Jul 15 07:37:36 2024 00:28:58.428 read: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec) 00:28:58.428 slat (nsec): min=4934, max=55729, avg=7143.71, stdev=2342.15 00:28:58.428 clat (usec): min=409, max=10311, avg=4196.26, stdev=709.98 00:28:58.428 lat (usec): min=416, max=10366, avg=4203.40, stdev=711.13 00:28:58.428 clat percentiles (usec): 00:28:58.428 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3687], 00:28:58.428 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4228], 00:28:58.428 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5997], 00:28:58.428 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 8455], 00:28:58.428 | 99.99th=[10159] 00:28:58.428 bw ( KiB/s): min=55416, max=65424, per=100.00%, avg=61154.67, stdev=5163.26, samples=3 00:28:58.428 iops : min=13854, max=16356, avg=15288.67, stdev=1290.81, samples=3 00:28:58.428 write: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(119MiB/2001msec); 0 zone resets 00:28:58.428 slat (usec): min=4, max=142, avg= 7.30, stdev= 2.39 00:28:58.428 clat (usec): min=306, max=10161, avg=4199.99, stdev=707.80 00:28:58.428 lat (usec): min=314, max=10177, avg=4207.29, stdev=708.91 00:28:58.428 clat percentiles (usec): 00:28:58.428 | 1.00th=[ 3359], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3687], 00:28:58.428 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4228], 00:28:58.428 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5997], 00:28:58.428 | 99.00th=[ 6456], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 8586], 00:28:58.428 | 99.99th=[ 9765] 00:28:58.428 bw ( KiB/s): min=55736, max=64912, per=99.81%, avg=60701.33, stdev=4634.32, samples=3 00:28:58.428 iops : min=13934, max=16228, avg=15175.33, stdev=1158.58, samples=3 00:28:58.428 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:58.428 lat (msec) : 2=0.06%, 4=53.91%, 10=45.99%, 20=0.01% 00:28:58.428 cpu : usr=98.70%, sys=0.35%, ctx=4, majf=0, minf=604 00:28:58.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:58.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.428 issued rwts: total=30367,30424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.428 00:28:58.428 Run status group 0 (all jobs): 00:28:58.428 READ: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:28:58.428 WRITE: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=119MiB (125MB), run=2001-2001msec 00:28:58.428 
----------------------------------------------------- 00:28:58.428 Suppressions used: 00:28:58.428 count bytes template 00:28:58.428 1 32 /usr/src/fio/parse.c 00:28:58.428 1 8 libtcmalloc_minimal.so 00:28:58.428 ----------------------------------------------------- 00:28:58.428 00:28:58.428 ************************************ 00:28:58.428 END TEST nvme_fio 00:28:58.428 07:37:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:28:58.428 07:37:36 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:28:58.428 00:28:58.428 real 0m17.589s 00:28:58.428 user 0m13.762s 00:28:58.428 sys 0m3.020s 00:28:58.428 07:37:36 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.428 07:37:36 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:28:58.428 ************************************ 00:28:58.428 07:37:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:28:58.428 ************************************ 00:28:58.428 END TEST nvme 00:28:58.428 ************************************ 00:28:58.428 00:28:58.428 real 1m33.096s 00:28:58.428 user 3m47.334s 00:28:58.428 sys 0m16.185s 00:28:58.428 07:37:36 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.428 07:37:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:28:58.428 07:37:36 -- common/autotest_common.sh@1142 -- # return 0 00:28:58.428 07:37:36 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:28:58.428 07:37:36 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:58.428 07:37:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.428 07:37:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.428 07:37:36 -- common/autotest_common.sh@10 -- # set +x 00:28:58.428 ************************************ 00:28:58.428 START TEST nvme_scc 00:28:58.428 ************************************ 00:28:58.428 07:37:36 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:28:58.428 * Looking for test storage... 
00:28:58.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:58.428 07:37:36 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:58.428 07:37:36 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.428 07:37:36 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.428 07:37:36 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.428 07:37:36 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.428 07:37:36 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.428 07:37:36 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.428 07:37:36 nvme_scc -- paths/export.sh@5 -- # export PATH 00:28:58.428 07:37:36 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:28:58.428 07:37:36 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:28:58.428 07:37:36 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:58.428 07:37:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:28:58.428 07:37:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:28:58.428 07:37:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:28:58.428 07:37:36 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:58.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:58.945 Waiting for block devices as requested 00:28:58.945 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:59.203 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:59.203 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:59.203 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:04.472 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:04.472 07:37:42 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:04.472 07:37:42 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:04.472 07:37:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:04.472 07:37:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:29:04.473 07:37:42 nvme_scc -- scripts/common.sh@15 -- # local i 00:29:04.473 07:37:42 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:29:04.473 07:37:42 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:04.473 07:37:42 nvme_scc -- scripts/common.sh@24 -- # return 0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:29:04.473 
07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.473 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:04.474 07:37:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
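What the trace above keeps repeating is nvme/functions.sh@21-23 looping over the id-ctrl dump for nvme0: each "name : value" line is split on ':' into reg/val and cached in a bash associative array named after the controller. Below is a minimal standalone sketch of that pattern, assuming nvme-cli's default plain-text id-ctrl output; the function name parse_id_ctrl and the array name fields are illustrative and not taken from the SPDK script.

  #!/usr/bin/env bash
  # Sketch of the key/value caching the xtrace above performs.
  # Assumes nvme-cli text output such as "oncs      : 0x15d".
  declare -A fields=()
  parse_id_ctrl() {                          # illustrative name, not from functions.sh
      local dev=$1 reg val
      while IFS=: read -r reg val; do
          reg=$(xargs <<<"$reg") val=$(xargs <<<"$val")   # trim the column padding
          [[ -n $val ]] || continue          # skip the banner line and blanks
          fields[$reg]=$val
      done < <(nvme id-ctrl "$dev")
  }
  parse_id_ctrl /dev/nvme0
  echo "oncs=${fields[oncs]} sqes=${fields[sqes]}"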
00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:04.474 07:37:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.474 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
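nvme0 reports oncs=0x15d above. Under the NVMe ONCS bit layout, where bit 8 indicates Copy command support, that value advertises the Copy (simple copy) command that this nvme_scc run exercises. A small illustrative check against the nvme0 array the trace populates; this is not necessarily how the suite itself gates the test.

  # oncs=0x15d -> bits 0,2,3,4,6,8 set; bit 8 (0x100) is the Copy command bit
  if (( ${nvme0[oncs]:-0} & 0x100 )); then
      echo "controller advertises the Copy (simple copy) command"
  fi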
00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:04.475 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.476 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:04.477 07:37:42 
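Reading the nvme0n1 id-ns values above together: flbas=0x4 selects lbaf4, whose lbads:12 means 2^12 = 4096-byte blocks, so nsze=0x140000 blocks comes to 5 GiB. A quick arithmetic check using shell arithmetic only, no device access needed:

  # nsze=0x140000 blocks at lbads=12 (4096-byte blocks)
  echo $(( 0x140000 * (1 << 12) ))                  # 5368709120 bytes
  echo "$(( (0x140000 * (1 << 12)) >> 30 )) GiB"    # 5 GiB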
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:29:04.477 07:37:42 nvme_scc -- scripts/common.sh@15 -- # local i 00:29:04.477 07:37:42 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:29:04.477 07:37:42 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:04.477 07:37:42 nvme_scc -- scripts/common.sh@24 -- # return 0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 
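Around this point functions.sh@47-63 shows the discovery scaffolding between the two id-ctrl dumps: walk /sys/class/nvme/nvme*, check the PCI address with pci_can_use, run nvme_get for the controller, iterate its nvme*n* block namespaces, and record the results in the ctrls/nvmes/bdfs/ordered_ctrls maps before moving on to nvme1 at 0000:00:10.0. A compressed, self-contained sketch of that walk, assuming only the usual sysfs layout; the echo stands in for the nvme_get calls and the pci_can_use filtering is omitted.

  declare -A ctrls=() bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                        # glob may not match without NVMe devices
      ctrl_dev=${ctrl##*/}                              # e.g. nvme1
      bdf=$(basename "$(readlink -f "$ctrl/device")")   # PCI address, e.g. 0000:00:10.0
      ctrls[$ctrl_dev]=$ctrl_dev
      bdfs[$ctrl_dev]=$bdf
      for ns in "$ctrl/${ctrl##*/}n"*; do               # block namespaces: nvme1n1, nvme1n2, ...
          [[ -e $ns ]] || continue
          echo "would run: nvme id-ns /dev/${ns##*/}"   # the trace runs nvme_get ... id-ns here
      done
  done
  declare -p ctrls bdfs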
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:04.477 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:29:04.478 07:37:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 
07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:29:04.478 07:37:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.478 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:29:04.479 07:37:43 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:29:04.479 07:37:43 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:29:04.479 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.480 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.480 
07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:29:04.481 07:37:43 nvme_scc -- scripts/common.sh@15 -- # local i 00:29:04.481 07:37:43 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:29:04.481 07:37:43 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:04.481 07:37:43 nvme_scc -- scripts/common.sh@24 -- # return 0 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.481 07:37:43 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.481 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:29:04.482 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:29:04.786 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.787 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:29:04.788 07:37:43 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 
07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.788 07:37:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:29:04.789 07:37:43 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.789 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.790 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:29:04.791 07:37:43 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:04.791 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.792 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
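The copy limits read back here for nvme2n3 (mssrl=128, mcl=128, msrc=127), like the identical values captured for nvme2n1 and nvme2n2 above, are the namespace fields the nvme_scc test relies on; together with ONCS bit 8 in the controller data (oncs=0x15d earlier in this trace) they advertise Simple Copy support. A one-line illustration of that bit check (an example only, not a command taken from the test):

# ONCS bit 8 (0x100) is the Copy command bit; 0x15d has it set.
(( 0x15d & 0x100 )) && echo "Simple Copy supported"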
00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
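[annotation] The nvme_get trace above repeats one pattern per field: split each "name : value" line of nvme-cli id-ctrl/id-ns output on ':', then eval the value into a named associative array. A minimal stand-alone sketch of that pattern (an illustrative simplification, not the actual nvme/functions.sh source):

    # sketch of the parse loop seen in the trace: IFS=:, read -r reg val,
    # then eval "<array>[<reg>]=<val>" whenever the value is non-empty
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "${ref}=()"                   # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[^a-zA-Z0-9_]/}          # keep identifier characters only
            val=${val# }                        # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\""
        done < <(nvme "$subcmd" "$dev")         # assumes nvme-cli is installed
    }
    # usage sketch: nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3; echo "${nvme2n3[nvmcap]}"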
00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:29:04.793 07:37:43 nvme_scc -- scripts/common.sh@15 -- # local i 00:29:04.793 07:37:43 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:29:04.793 07:37:43 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:04.793 07:37:43 nvme_scc -- scripts/common.sh@24 -- # return 0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@18 -- # shift 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.793 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:29:04.794 07:37:43 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:29:04.794 07:37:43 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:29:04.794 07:37:43 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:29:04.794 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:29:04.795 07:37:43 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
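[annotation] The oncs=0x15d recorded just above is the value the scc test keys off: further down, ctrl_has_scc tests ONCS bit 8 (the Simple Copy bit) with the same arithmetic shown here.

    # why every controller in this run passes ctrl_has_scc
    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "ONCS bit 8 set: Simple Copy (SCC) supported"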
00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.795 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:29:04.796 
07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
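[annotation] For reference, the lookup half of this bookkeeping (functions.sh@69-76, traced a few entries below as get_nvme_ctrl_feature) boils down to a nameref into the per-controller array. A rough sketch with illustrative names, not the verbatim source:

    # mirrors the local -n / echo pattern visible in the trace below
    get_nvme_reg_sketch() {
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl                    # nameref to e.g. the nvme3 array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    # e.g. get_nvme_reg_sketch nvme3 oncs   ->  0x15d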
00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:29:04.796 07:37:43 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:29:04.796 07:37:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:29:04.796 07:37:43 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:29:04.796 07:37:43 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:05.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:05.925 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:05.925 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:05.925 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:05.925 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:06.183 07:37:44 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:29:06.183 07:37:44 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:06.183 07:37:44 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.183 07:37:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:29:06.183 ************************************ 00:29:06.183 START TEST nvme_simple_copy 00:29:06.183 ************************************ 00:29:06.183 07:37:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:29:06.441 Initializing NVMe Controllers 00:29:06.441 Attaching to 0000:00:10.0 00:29:06.441 Controller supports SCC. Attached to 0000:00:10.0 00:29:06.441 Namespace ID: 1 size: 6GB 00:29:06.441 Initialization complete. 00:29:06.441 00:29:06.441 Controller QEMU NVMe Ctrl (12340 ) 00:29:06.441 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:06.441 Namespace Block Size:4096 00:29:06.441 Writing LBAs 0 to 63 with Random Data 00:29:06.441 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:06.441 LBAs matching Written Data: 64 00:29:06.441 00:29:06.441 real 0m0.311s 00:29:06.441 user 0m0.118s 00:29:06.441 sys 0m0.090s 00:29:06.441 ************************************ 00:29:06.441 END TEST nvme_simple_copy 00:29:06.441 ************************************ 00:29:06.441 07:37:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.441 07:37:44 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:29:06.441 07:37:44 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:29:06.441 ************************************ 00:29:06.441 END TEST nvme_scc 00:29:06.441 ************************************ 00:29:06.441 00:29:06.441 real 0m8.112s 00:29:06.441 user 0m1.280s 00:29:06.441 sys 0m1.712s 00:29:06.441 07:37:44 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.441 07:37:44 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:29:06.441 07:37:45 -- common/autotest_common.sh@1142 -- # return 0 00:29:06.441 07:37:45 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:29:06.441 07:37:45 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:29:06.441 07:37:45 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:29:06.441 07:37:45 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:29:06.441 07:37:45 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:29:06.441 07:37:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:06.441 07:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.441 07:37:45 -- common/autotest_common.sh@10 -- # set +x 00:29:06.441 ************************************ 00:29:06.441 START TEST nvme_fdp 00:29:06.441 ************************************ 00:29:06.441 07:37:45 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:29:06.700 * Looking for test storage... 
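[annotation] The nvme_simple_copy sub-test that finished above is a stand-alone SPDK example binary; the invocation below is copied from this run (the repo path and PCI address are specific to this VM), in case the result needs to be reproduced by hand:

    # same binary and transport ID string as in the log above
    /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy \
        -r 'trtype:pcie traddr:0000:00:10.0'
    # per this run it writes LBAs 0-63 with random data, copies them to LBA 256,
    # and reports "LBAs matching Written Data: 64"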
00:29:06.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:06.700 07:37:45 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.700 07:37:45 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.700 07:37:45 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.700 07:37:45 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.700 07:37:45 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.700 07:37:45 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.700 07:37:45 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.700 07:37:45 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:29:06.700 07:37:45 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:06.700 07:37:45 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:29:06.700 07:37:45 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:06.700 07:37:45 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:06.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.216 Waiting for block devices as requested 00:29:07.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:07.216 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:07.566 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:07.566 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:12.832 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:12.832 07:37:51 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:29:12.832 07:37:51 nvme_fdp -- scripts/common.sh@15 -- # local i 00:29:12.832 07:37:51 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:29:12.832 07:37:51 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:12.832 07:37:51 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 
07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.832 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:12.833 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:12.833 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.833 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:12.834 07:37:51 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.834 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:12.835 
07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:12.835 
07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:12.835 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:12.836 07:37:51 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:29:12.836 07:37:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:29:12.836 07:37:51 nvme_fdp -- scripts/common.sh@15 -- # local i 00:29:12.836 07:37:51 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:29:12.837 07:37:51 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:12.837 07:37:51 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:29:12.837 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 
07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.837 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:29:12.838 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
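Two of the fields just captured, nvme1[wctemp]=343 and nvme1[cctemp]=373, are the warning and critical composite temperature thresholds; the NVMe spec reports them in Kelvin (an assumption, not stated in the log itself), so this QEMU controller is advertising roughly 70 C and 100 C. A small sketch of the conversion:

declare -A nvme1=( [wctemp]=343 [cctemp]=373 )   # values copied from the trace above
for key in wctemp cctemp; do
    # subtract 273 to show the threshold in Celsius
    printf '%s: %s K (~%d C)\n' "$key" "${nvme1[$key]}" "$(( ${nvme1[$key]} - 273 ))"
done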
00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:29:12.838 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:29:12.838 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
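The controller-level parse above also stored nvme1[oncs]=0x15d, the Optional NVM Command Support mask; test scripts typically gate optional-command cases on individual bits of this value. The decode below is a hedged sketch: the bit meanings come from the NVMe base specification as commonly documented, not from the log, so treat the labels as assumptions.

oncs=0x15d                                   # value copied from the trace above
labels=(Compare "Write Uncorrectable" "Dataset Management" "Write Zeroes"
        "Save/Select in Features" Reservations Timestamp Verify Copy)
for bit in "${!labels[@]}"; do
    # report each optional command whose support bit is set in the mask
    (( oncs & (1 << bit) )) && printf 'ONCS bit %d: %s supported\n' "$bit" "${labels[bit]}"
done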
00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:29:12.839 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 
07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.840 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
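For the namespace, the fields stored above are enough to recover the in-use block size and raw capacity: nvme1n1[flbas]=0x7 selects LBA format 7 (the lbaf7 entry, flagged "(in use)" just after this point, carries lbads:12, i.e. 4096-byte blocks), and nvme1n1[nsze]=0x17a17a is the size in blocks, which works out to about 5.9 GiB. A hedged sketch of that arithmetic using values copied from the trace; the FLBAS bit layout (bits 3:0 select the format) is taken from the NVMe spec, not from the log:

declare -A nvme1n1=(                       # values copied from the trace above
    [nsze]=0x17a17a
    [flbas]=0x7
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'
)
fmt=$(( ${nvme1n1[flbas]} & 0xf ))         # FLBAS bits 3:0 -> format index 7 (assumed layout)
lbaf=${nvme1n1[lbaf$fmt]}                  # "ms:64 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # pull the lbads exponent out of the string
block=$(( 1 << lbads ))                    # 2^12 = 4096 bytes
printf 'in-use block %d B, capacity %d bytes\n' "$block" "$(( ${nvme1n1[nsze]} * block ))"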
00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:29:12.841 07:37:51 nvme_fdp -- scripts/common.sh@15 -- # local i 00:29:12.841 07:37:51 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:29:12.841 07:37:51 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:12.841 07:37:51 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.841 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:29:12.842 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.842 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:29:12.843 07:37:51 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:29:12.843 07:37:51 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:29:12.843 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 
07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 
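[editor's note] The trace above shows the controller identify pass for nvme2: nvme/functions.sh runs nvme-cli, reads each "field : value" line with IFS=: and read -r, and evals the value into a global associative array keyed by the field name. Below is a minimal sketch of that pattern, written for illustration only; the array names (nvme2, nvme2n1), the nvme-cli path and the id-ctrl/id-ns subcommands come from the log, while the function name nvme_get_sketch and the whitespace trimming are assumptions, not the SPDK helper itself.

    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref"                              # global array, e.g. nvme2 or nvme2n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                    # strip padding around the key
            val="${val#"${val%%[![:space:]]*}"}"        # trim leading blanks in the value
            [[ -n $val ]] || continue                   # skip header/blank lines ([[ -n '' ]] in the trace)
            eval "${ref}[${reg}]=\"${val}\""            # nvme2[lpa]="0x7", nvme2[wctemp]="343", ...
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }
    # Hypothetical usage mirroring the trace:
    #   nvme_get_sketch nvme2   id-ctrl /dev/nvme2
    #   nvme_get_sketch nvme2n1 id-ns   /dev/nvme2n1
    #   echo "${nvme2[subnqn]}"     # -> nqn.2019-08.org.qemu:12342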
'nvme2n1[nuse]="0x100000"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:29:12.844 07:37:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:29:12.844 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:29:12.845 07:37:51 nvme_fdp -- 
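[editor's note] At this point nvme2n1 has been fully parsed and recorded in _ctrl_ns, and the loop moves on to nvme2n2. The sketch below illustrates that namespace walk as it appears in the trace: glob the controller's namespace nodes under /sys/class/nvme/nvme2, parse each with id-ns, and remember the device name per namespace index. Paths and names follow the log; nvme_get_sketch is the hypothetical parser from the previous sketch and ctrl_ns is an illustrative stand-in for the script's _ctrl_ns nameref.

    ctrl=/sys/class/nvme/nvme2
    declare -A ctrl_ns=()                           # namespace index -> device name

    for ns in "$ctrl/${ctrl##*/}n"*; do             # nvme2n1, nvme2n2, nvme2n3, ...
        [[ -e $ns ]] || continue                    # same existence check as functions.sh@55
        ns_dev=${ns##*/}                            # e.g. nvme2n1
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        ctrl_ns[${ns##*n}]=$ns_dev                  # ctrl_ns[1]=nvme2n1, ctrl_ns[2]=nvme2n2, ...
    done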
nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:29:12.845 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:12.846 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:29:12.847 07:37:51 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- 
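[editor's note] nvme2n2 is now stored and nvme2n3 is being parsed next. As a follow-up to the sketches above, this shows one way the populated per-namespace arrays could be queried afterwards, using the values the trace reports (flbas=0x4 and lbaf4 marked "(in use)" with lbads:12): the low 4 bits of FLBAS select the active LBA format, and lbads is the log2 of the block size. The array literal here is an assumption for a self-contained example, not output of the script.

    declare -A nvme2n2=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    fmt=$(( ${nvme2n2[flbas]} & 0xf ))              # -> 4
    lbaf=${nvme2n2[lbaf${fmt}]}                     # -> "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}       # -> 12
    echo "nvme2n2 uses LBA format $fmt, block size $((1 << lbads)) bytes"   # 4096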
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:29:13.109 07:37:51 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:29:13.109 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:29:13.110 07:37:51 nvme_fdp -- scripts/common.sh@15 -- # local i 00:29:13.110 07:37:51 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:29:13.110 07:37:51 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:13.110 07:37:51 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:29:13.110 07:37:51 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.110 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.111 07:37:51 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:29:13.111 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:29:13.112 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 
07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:29:13.113 07:37:51 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:29:13.113 07:37:51 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:29:13.113 07:37:51 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:29:13.113 07:37:51 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:13.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.276 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.276 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.276 07:37:52 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:29:14.276 07:37:52 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:14.276 07:37:52 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.276 07:37:52 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:29:14.276 ************************************ 00:29:14.276 START TEST nvme_flexible_data_placement 00:29:14.276 ************************************ 00:29:14.276 07:37:52 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:29:14.533 Initializing NVMe Controllers 00:29:14.533 Attaching to 0000:00:13.0 00:29:14.533 Controller supports FDP Attached to 0000:00:13.0 00:29:14.533 Namespace ID: 1 Endurance Group ID: 1 
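The controller selection traced above reduces to a single bitwise test: ctrl_has_fdp() in nvme/functions.sh reads the ':'-separated fields of nvme id-ctrl into an associative array and treats a controller as FDP-capable when bit 19 of its ctratt value is set, which is why nvme3 (ctratt=0x88010) is chosen while the controllers reporting 0x8000 are skipped. Below is a minimal standalone sketch of the same check; it assumes nvme-cli is installed and the script runs as root, and the awk extraction of the ctratt line is an illustrative stand-in for the IFS=: read loop used by functions.sh, not code taken from the test suite.

    #!/usr/bin/env bash
    # Sketch: report whether one controller advertises Flexible Data Placement.
    # Assumption: nvme-cli prints a "ctratt : 0x..." line in its id-ctrl output.
    ctrl=${1:-/dev/nvme3}
    ctratt=$(nvme id-ctrl "$ctrl" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2; exit}')
    if (( ctratt & 1 << 19 )); then
        echo "$ctrl supports FDP (ctratt=$ctratt)"
    else
        echo "$ctrl does not support FDP (ctratt=$ctratt)"
    fi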
00:29:14.533 Initialization complete. 00:29:14.533 00:29:14.533 ================================== 00:29:14.533 == FDP tests for Namespace: #01 == 00:29:14.533 ================================== 00:29:14.533 00:29:14.533 Get Feature: FDP: 00:29:14.533 ================= 00:29:14.533 Enabled: Yes 00:29:14.533 FDP configuration Index: 0 00:29:14.533 00:29:14.533 FDP configurations log page 00:29:14.533 =========================== 00:29:14.533 Number of FDP configurations: 1 00:29:14.533 Version: 0 00:29:14.533 Size: 112 00:29:14.533 FDP Configuration Descriptor: 0 00:29:14.533 Descriptor Size: 96 00:29:14.533 Reclaim Group Identifier format: 2 00:29:14.533 FDP Volatile Write Cache: Not Present 00:29:14.533 FDP Configuration: Valid 00:29:14.533 Vendor Specific Size: 0 00:29:14.533 Number of Reclaim Groups: 2 00:29:14.533 Number of Reclaim Unit Handles: 8 00:29:14.534 Max Placement Identifiers: 128 00:29:14.534 Number of Namespaces Supported: 256 00:29:14.534 Reclaim unit Nominal Size: 6000000 bytes 00:29:14.534 Estimated Reclaim Unit Time Limit: Not Reported 00:29:14.534 RUH Desc #000: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #001: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #002: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #003: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #004: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #005: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #006: RUH Type: Initially Isolated 00:29:14.534 RUH Desc #007: RUH Type: Initially Isolated 00:29:14.534 00:29:14.534 FDP reclaim unit handle usage log page 00:29:14.534 ====================================== 00:29:14.534 Number of Reclaim Unit Handles: 8 00:29:14.534 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:29:14.534 RUH Usage Desc #001: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #002: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #003: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #004: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #005: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #006: RUH Attributes: Unused 00:29:14.534 RUH Usage Desc #007: RUH Attributes: Unused 00:29:14.534 00:29:14.534 FDP statistics log page 00:29:14.534 ======================= 00:29:14.534 Host bytes with metadata written: 838283264 00:29:14.534 Media bytes with metadata written: 838397952 00:29:14.534 Media bytes erased: 0 00:29:14.534 00:29:14.534 FDP Reclaim unit handle status 00:29:14.534 ============================== 00:29:14.534 Number of RUHS descriptors: 2 00:29:14.534 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000408d 00:29:14.534 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:29:14.534 00:29:14.534 FDP write on placement id: 0 success 00:29:14.534 00:29:14.534 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:29:14.534 00:29:14.534 IO mgmt send: RUH update for Placement ID: #0 Success 00:29:14.534 00:29:14.534 Get Feature: FDP Events for Placement handle: #0 00:29:14.534 ======================== 00:29:14.534 Number of FDP Events: 6 00:29:14.534 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:29:14.534 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:29:14.534 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:29:14.534 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:29:14.534 FDP Event: #4 Type: Media Reallocated Enabled: No 00:29:14.534 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:29:14.534 00:29:14.534 FDP events log page 00:29:14.534 =================== 00:29:14.534 Number of FDP events: 1 00:29:14.534 FDP Event #0: 00:29:14.534 Event Type: RU Not Written to Capacity 00:29:14.534 Placement Identifier: Valid 00:29:14.534 NSID: Valid 00:29:14.534 Location: Valid 00:29:14.534 Placement Identifier: 0 00:29:14.534 Event Timestamp: 7 00:29:14.534 Namespace Identifier: 1 00:29:14.534 Reclaim Group Identifier: 0 00:29:14.534 Reclaim Unit Handle Identifier: 0 00:29:14.534 00:29:14.534 FDP test passed 00:29:14.534 00:29:14.534 real 0m0.271s 00:29:14.534 user 0m0.087s 00:29:14.534 sys 0m0.083s 00:29:14.534 07:37:53 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.534 ************************************ 00:29:14.534 END TEST nvme_flexible_data_placement 00:29:14.534 ************************************ 00:29:14.534 07:37:53 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:29:14.534 07:37:53 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:29:14.534 ************************************ 00:29:14.534 END TEST nvme_fdp 00:29:14.534 ************************************ 00:29:14.534 00:29:14.534 real 0m8.068s 00:29:14.534 user 0m1.250s 00:29:14.534 sys 0m1.735s 00:29:14.534 07:37:53 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.534 07:37:53 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:29:14.534 07:37:53 -- common/autotest_common.sh@1142 -- # return 0 00:29:14.534 07:37:53 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:29:14.534 07:37:53 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:14.534 07:37:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:14.534 07:37:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.534 07:37:53 -- common/autotest_common.sh@10 -- # set +x 00:29:14.534 ************************************ 00:29:14.534 START TEST nvme_rpc 00:29:14.534 ************************************ 00:29:14.534 07:37:53 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:14.800 * Looking for test storage... 
00:29:14.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72706 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:14.800 07:37:53 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72706 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72706 ']' 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.800 07:37:53 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:15.060 [2024-07-15 07:37:53.422276] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:29:15.060 [2024-07-15 07:37:53.422472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72706 ] 00:29:15.060 [2024-07-15 07:37:53.593006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.318 [2024-07-15 07:37:53.866360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.318 [2024-07-15 07:37:53.866373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.252 07:37:54 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.252 07:37:54 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:29:16.252 07:37:54 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:16.509 Nvme0n1 00:29:16.509 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:16.509 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:17.086 request: 00:29:17.086 { 00:29:17.086 "bdev_name": "Nvme0n1", 00:29:17.086 "filename": "non_existing_file", 00:29:17.086 "method": "bdev_nvme_apply_firmware", 00:29:17.086 "req_id": 1 00:29:17.086 } 00:29:17.086 Got JSON-RPC error response 00:29:17.086 response: 00:29:17.086 { 00:29:17.087 "code": -32603, 00:29:17.087 "message": "open file failed." 00:29:17.087 } 00:29:17.087 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:17.087 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:17.087 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:17.087 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:17.087 07:37:55 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72706 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72706 ']' 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72706 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72706 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:17.087 killing process with pid 72706 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72706' 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72706 00:29:17.087 07:37:55 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72706 00:29:19.616 00:29:19.616 real 0m4.916s 00:29:19.616 user 0m8.966s 00:29:19.616 sys 0m0.816s 00:29:19.616 07:37:58 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:19.616 ************************************ 00:29:19.616 07:37:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:19.616 END TEST nvme_rpc 00:29:19.616 ************************************ 00:29:19.616 07:37:58 -- common/autotest_common.sh@1142 -- # return 0 00:29:19.616 07:37:58 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
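[Editor's note] Stripped of the xtrace noise, the nvme_rpc test above reduces to three JSON-RPC calls against the spdk_tgt it launched; the firmware call is meant to fail with -32603 ("open file failed.") because the file does not exist. A condensed sketch of that sequence, assuming the target is already listening on /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "apply_firmware failed as expected"
    fi
    $rpc bdev_nvme_detach_controller Nvme0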
00:29:19.616 07:37:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:19.616 07:37:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.616 07:37:58 -- common/autotest_common.sh@10 -- # set +x 00:29:19.616 ************************************ 00:29:19.616 START TEST nvme_rpc_timeouts 00:29:19.616 ************************************ 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:19.616 * Looking for test storage... 00:29:19.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72788 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72788 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72816 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:19.616 07:37:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72816 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72816 ']' 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.616 07:37:58 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:29:19.875 [2024-07-15 07:37:58.307799] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:29:19.875 [2024-07-15 07:37:58.307975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72816 ] 00:29:19.875 [2024-07-15 07:37:58.481157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.442 [2024-07-15 07:37:58.818425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.442 [2024-07-15 07:37:58.818438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.376 07:37:59 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.376 07:37:59 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:29:21.376 07:37:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:21.376 Checking default timeout settings: 00:29:21.376 07:37:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:21.633 Making settings changes with rpc: 00:29:21.633 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:21.633 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:21.890 Check default vs. modified settings: 00:29:21.890 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:21.890 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 Setting action_on_timeout is changed as expected. 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 Setting timeout_us is changed as expected. 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.455 Setting timeout_admin_us is changed as expected. 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
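[Editor's note] Every "changed as expected" check above follows the same pattern: save the JSON config before and after bdev_nvme_set_options, pull each field out with grep/awk/sed, and require that the value actually changed. Collapsed into one loop (temp file names are illustrative; this run used /tmp/settings_default_72788 and /tmp/settings_modified_72788):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default      # defaults in this run: action_on_timeout=none, both timeouts 0
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for key in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$key" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$key" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $key is changed as expected."
    done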
00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72788 /tmp/settings_modified_72788 00:29:22.455 07:38:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72816 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72816 ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72816 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72816 00:29:22.455 killing process with pid 72816 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72816' 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72816 00:29:22.455 07:38:00 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72816 00:29:24.980 RPC TIMEOUT SETTING TEST PASSED. 00:29:24.980 07:38:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:29:24.980 ************************************ 00:29:24.980 END TEST nvme_rpc_timeouts 00:29:24.980 ************************************ 00:29:24.980 00:29:24.980 real 0m5.240s 00:29:24.980 user 0m9.777s 00:29:24.980 sys 0m0.843s 00:29:24.980 07:38:03 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.980 07:38:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:29:24.980 07:38:03 -- common/autotest_common.sh@1142 -- # return 0 00:29:24.980 07:38:03 -- spdk/autotest.sh@243 -- # uname -s 00:29:24.980 07:38:03 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:29:24.980 07:38:03 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:29:24.980 07:38:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:24.980 07:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.980 07:38:03 -- common/autotest_common.sh@10 -- # set +x 00:29:24.980 ************************************ 00:29:24.980 START TEST sw_hotplug 00:29:24.980 ************************************ 00:29:24.980 07:38:03 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:29:24.980 * Looking for test storage... 
00:29:24.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:24.980 07:38:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:25.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:25.526 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:25.526 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:25.526 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:25.526 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:25.526 07:38:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:29:25.526 07:38:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:29:25.526 07:38:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:29:25.526 07:38:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:29:25.526 07:38:03 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@230 -- # local class 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@15 -- # local i 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:29:25.526 07:38:04 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@15 -- # local i 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@15 -- # local i 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:29:25.526 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:29:25.527 07:38:04 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:29:25.527 07:38:04 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:29:25.527 07:38:04 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:29:25.527 07:38:04 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:25.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:26.042 Waiting for block devices as requested 00:29:26.042 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:26.301 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:26.301 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:26.301 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:31.562 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:31.562 07:38:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:29:31.562 07:38:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:31.818 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:29:31.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:31.818 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:29:32.383 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:29:32.641 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:32.641 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73674 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:29:32.641 07:38:11 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:29:32.641 07:38:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:29:32.899 Initializing NVMe Controllers 00:29:32.899 Attaching to 0000:00:10.0 00:29:32.899 Attaching to 0000:00:11.0 00:29:32.899 Attached to 0000:00:10.0 00:29:32.899 Attached to 0000:00:11.0 00:29:32.899 Initialization complete. Starting I/O... 
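[Editor's note] The two PCI addresses being exercised by the hotplug run come from nvme_in_userspace, which keeps devices whose PCI class/subclass/prog-if is 01/08/02 (NVM Express) and then trims the list to PCI_ALLOWED and nvme_count=2. The discovery step, collapsed into the single pipeline that the trace above sets up piecewise:

    # list NVMe controllers (class 01, subclass 08, prog-if 02) by PCI address
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'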
00:29:32.899 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:29:32.899 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:29:32.899 00:29:33.900 QEMU NVMe Ctrl (12340 ): 1021 I/Os completed (+1021) 00:29:33.900 QEMU NVMe Ctrl (12341 ): 1069 I/Os completed (+1069) 00:29:33.900 00:29:35.271 QEMU NVMe Ctrl (12340 ): 2233 I/Os completed (+1212) 00:29:35.271 QEMU NVMe Ctrl (12341 ): 2417 I/Os completed (+1348) 00:29:35.271 00:29:36.202 QEMU NVMe Ctrl (12340 ): 3834 I/Os completed (+1601) 00:29:36.202 QEMU NVMe Ctrl (12341 ): 4169 I/Os completed (+1752) 00:29:36.202 00:29:37.135 QEMU NVMe Ctrl (12340 ): 5481 I/Os completed (+1647) 00:29:37.135 QEMU NVMe Ctrl (12341 ): 5933 I/Os completed (+1764) 00:29:37.135 00:29:38.069 QEMU NVMe Ctrl (12340 ): 7087 I/Os completed (+1606) 00:29:38.069 QEMU NVMe Ctrl (12341 ): 7681 I/Os completed (+1748) 00:29:38.069 00:29:38.637 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:38.637 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:38.637 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:38.637 [2024-07-15 07:38:17.221147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:29:38.637 Controller removed: QEMU NVMe Ctrl (12340 ) 00:29:38.637 [2024-07-15 07:38:17.223242] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.223332] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.223366] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.223397] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:29:38.637 [2024-07-15 07:38:17.226488] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.226549] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.226576] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 [2024-07-15 07:38:17.226602] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.637 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:38.637 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:38.895 [2024-07-15 07:38:17.251310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:29:38.895 Controller removed: QEMU NVMe Ctrl (12341 ) 00:29:38.895 [2024-07-15 07:38:17.253288] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.253493] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.253679] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.253856] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:29:38.895 [2024-07-15 07:38:17.256967] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.257137] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.257275] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 [2024-07-15 07:38:17.257423] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:38.895 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:29:38.895 EAL: Scan for (pci) bus failed. 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:38.895 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:29:38.895 Attaching to 0000:00:10.0 00:29:38.895 Attached to 0000:00:10.0 00:29:38.895 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:29:38.895 00:29:39.158 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:39.158 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:39.158 07:38:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:39.158 Attaching to 0000:00:11.0 00:29:39.158 Attached to 0000:00:11.0 00:29:40.092 QEMU NVMe Ctrl (12340 ): 1750 I/Os completed (+1750) 00:29:40.092 QEMU NVMe Ctrl (12341 ): 1600 I/Os completed (+1600) 00:29:40.092 00:29:41.028 QEMU NVMe Ctrl (12340 ): 3478 I/Os completed (+1728) 00:29:41.028 QEMU NVMe Ctrl (12341 ): 3354 I/Os completed (+1754) 00:29:41.028 00:29:41.963 QEMU NVMe Ctrl (12340 ): 5166 I/Os completed (+1688) 00:29:41.963 QEMU NVMe Ctrl (12341 ): 5157 I/Os completed (+1803) 00:29:41.963 00:29:42.898 QEMU NVMe Ctrl (12340 ): 6824 I/Os completed (+1658) 00:29:42.898 QEMU NVMe Ctrl (12341 ): 6940 I/Os completed (+1783) 00:29:42.898 00:29:44.273 QEMU NVMe Ctrl (12340 ): 8352 I/Os completed (+1528) 00:29:44.273 QEMU NVMe Ctrl (12341 ): 8667 I/Os completed (+1727) 00:29:44.273 00:29:45.261 QEMU NVMe Ctrl (12340 ): 9960 I/Os completed (+1608) 00:29:45.261 QEMU NVMe Ctrl (12341 ): 10389 I/Os completed (+1722) 00:29:45.261 00:29:46.209 QEMU NVMe Ctrl (12340 ): 11593 I/Os completed (+1633) 00:29:46.209 QEMU 
NVMe Ctrl (12341 ): 12084 I/Os completed (+1695) 00:29:46.209 00:29:47.143 QEMU NVMe Ctrl (12340 ): 13173 I/Os completed (+1580) 00:29:47.143 QEMU NVMe Ctrl (12341 ): 13760 I/Os completed (+1676) 00:29:47.143 00:29:48.078 QEMU NVMe Ctrl (12340 ): 14845 I/Os completed (+1672) 00:29:48.078 QEMU NVMe Ctrl (12341 ): 15447 I/Os completed (+1687) 00:29:48.078 00:29:49.053 QEMU NVMe Ctrl (12340 ): 16365 I/Os completed (+1520) 00:29:49.053 QEMU NVMe Ctrl (12341 ): 17110 I/Os completed (+1663) 00:29:49.053 00:29:49.986 QEMU NVMe Ctrl (12340 ): 18065 I/Os completed (+1700) 00:29:49.986 QEMU NVMe Ctrl (12341 ): 18850 I/Os completed (+1740) 00:29:49.986 00:29:50.920 QEMU NVMe Ctrl (12340 ): 19778 I/Os completed (+1713) 00:29:50.920 QEMU NVMe Ctrl (12341 ): 20609 I/Os completed (+1759) 00:29:50.920 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:51.179 [2024-07-15 07:38:29.568998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:29:51.179 Controller removed: QEMU NVMe Ctrl (12340 ) 00:29:51.179 [2024-07-15 07:38:29.571308] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.571532] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.571713] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.571790] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:29:51.179 [2024-07-15 07:38:29.575182] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.575362] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.575399] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.575425] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:51.179 [2024-07-15 07:38:29.596661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:29:51.179 Controller removed: QEMU NVMe Ctrl (12341 ) 00:29:51.179 [2024-07-15 07:38:29.598623] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.598723] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.598817] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.598883] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:29:51.179 [2024-07-15 07:38:29.602119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.602190] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.602231] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 [2024-07-15 07:38:29.602256] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:51.179 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:29:51.179 EAL: Scan for (pci) bus failed. 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:51.179 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:29:51.438 Attaching to 0000:00:10.0 00:29:51.438 Attached to 0000:00:10.0 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:51.438 07:38:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:51.438 Attaching to 0000:00:11.0 00:29:51.438 Attached to 0000:00:11.0 00:29:52.004 QEMU NVMe Ctrl (12340 ): 1096 I/Os completed (+1096) 00:29:52.004 QEMU NVMe Ctrl (12341 ): 903 I/Os completed (+903) 00:29:52.004 00:29:52.937 QEMU NVMe Ctrl (12340 ): 2540 I/Os completed (+1444) 00:29:52.937 QEMU NVMe Ctrl (12341 ): 2450 I/Os completed (+1547) 00:29:52.937 00:29:53.872 QEMU NVMe Ctrl (12340 ): 3968 I/Os completed (+1428) 00:29:53.872 QEMU NVMe Ctrl (12341 ): 3990 I/Os completed (+1540) 00:29:53.872 00:29:55.257 QEMU NVMe Ctrl (12340 ): 5548 I/Os completed (+1580) 00:29:55.257 QEMU NVMe Ctrl (12341 ): 5612 I/Os completed (+1622) 00:29:55.257 00:29:56.191 QEMU NVMe Ctrl (12340 ): 7040 I/Os completed (+1492) 00:29:56.191 QEMU NVMe Ctrl (12341 ): 7389 I/Os completed (+1777) 00:29:56.191 00:29:57.125 QEMU NVMe Ctrl (12340 ): 8545 I/Os completed (+1505) 00:29:57.125 QEMU NVMe Ctrl (12341 ): 8989 I/Os completed (+1600) 00:29:57.125 00:29:58.060 QEMU NVMe Ctrl (12340 ): 10105 I/Os completed (+1560) 00:29:58.060 QEMU NVMe Ctrl (12341 ): 10658 I/Os completed (+1669) 00:29:58.060 00:29:59.000 
QEMU NVMe Ctrl (12340 ): 11645 I/Os completed (+1540) 00:29:59.000 QEMU NVMe Ctrl (12341 ): 12249 I/Os completed (+1591) 00:29:59.000 00:29:59.935 QEMU NVMe Ctrl (12340 ): 13137 I/Os completed (+1492) 00:29:59.935 QEMU NVMe Ctrl (12341 ): 13832 I/Os completed (+1583) 00:29:59.935 00:30:00.870 QEMU NVMe Ctrl (12340 ): 14783 I/Os completed (+1646) 00:30:00.870 QEMU NVMe Ctrl (12341 ): 15506 I/Os completed (+1674) 00:30:00.870 00:30:02.321 QEMU NVMe Ctrl (12340 ): 16439 I/Os completed (+1656) 00:30:02.321 QEMU NVMe Ctrl (12341 ): 17216 I/Os completed (+1710) 00:30:02.321 00:30:02.889 QEMU NVMe Ctrl (12340 ): 18051 I/Os completed (+1612) 00:30:02.889 QEMU NVMe Ctrl (12341 ): 18881 I/Os completed (+1665) 00:30:02.889 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:03.455 [2024-07-15 07:38:41.928037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:30:03.455 Controller removed: QEMU NVMe Ctrl (12340 ) 00:30:03.455 [2024-07-15 07:38:41.931419] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.931619] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.931696] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.931848] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:30:03.455 [2024-07-15 07:38:41.935238] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.935343] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.935411] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.935492] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:03.455 [2024-07-15 07:38:41.954391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:30:03.455 Controller removed: QEMU NVMe Ctrl (12341 ) 00:30:03.455 [2024-07-15 07:38:41.960241] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.960422] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.960527] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.960695] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:30:03.455 [2024-07-15 07:38:41.963783] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.963965] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 [2024-07-15 07:38:41.964106] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:30:03.455 [2024-07-15 07:38:41.964190] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:03.455 07:38:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:03.455 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:03.455 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:03.455 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:30:03.714 Attaching to 0000:00:10.0 00:30:03.714 Attached to 0000:00:10.0 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:03.714 07:38:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:03.714 Attaching to 0000:00:11.0 00:30:03.714 Attached to 0000:00:11.0 00:30:03.714 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:30:03.714 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:30:03.714 [2024-07-15 07:38:42.279979] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:30:15.900 07:38:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:30:15.900 07:38:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:15.900 07:38:54 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.05 00:30:15.900 07:38:54 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.05 00:30:15.900 07:38:54 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:30:15.900 07:38:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.05 00:30:15.900 07:38:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 2 00:30:15.900 remove_attach_helper took 43.05s to complete (handling 2 nvme drive(s)) 07:38:54 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73674 00:30:22.481 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73674) - No such process 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73674 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74217 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:22.481 07:39:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74217 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74217 ']' 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:22.481 07:39:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:22.481 [2024-07-15 07:39:00.393508] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:30:22.481 [2024-07-15 07:39:00.393697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74217 ] 00:30:22.481 [2024-07-15 07:39:00.560043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.481 [2024-07-15 07:39:00.831977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:30:23.415 07:39:01 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:30:23.415 07:39:01 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:30:23.415 07:39:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:29.981 07:39:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.981 07:39:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:29.981 [2024-07-15 07:39:07.836117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:30:29.981 [2024-07-15 07:39:07.839167] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:07.839221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:07.839274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:07.839306] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:07.839338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:07.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:07.839372] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:07.839387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:07.839404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:07.839419] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:07.839439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:07.839467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 07:39:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.981 07:39:07 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:30:29.981 07:39:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:29.981 [2024-07-15 07:39:08.336154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:30:29.981 [2024-07-15 07:39:08.339562] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:08.339634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:08.339658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:08.339692] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:08.339709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:08.339727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:08.339743] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:08.339760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:08.339774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 [2024-07-15 07:39:08.339793] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:29.981 [2024-07-15 07:39:08.339807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.981 [2024-07-15 07:39:08.339824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:29.981 07:39:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.981 07:39:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:29.981 07:39:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:29.981 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:30.238 
07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:30.238 07:39:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:42.476 [2024-07-15 07:39:20.836319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
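Each hotplug event begins with sw_hotplug.sh@39-40 writing 1 once per controller; only the value is visible in the xtrace, not the sysfs file it goes to. A plausible reconstruction of that surprise-removal step, with the target path stated as an assumption:

# Assumed target: the PCI 'remove' attribute; the trace shows only 'echo 1'.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done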
00:30:42.476 [2024-07-15 07:39:20.839828] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.476 [2024-07-15 07:39:20.839999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.476 [2024-07-15 07:39:20.840159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 07:39:20.840404] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.476 [2024-07-15 07:39:20.840546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.476 [2024-07-15 07:39:20.840763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 07:39:20.840924] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.476 [2024-07-15 07:39:20.841086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.476 [2024-07-15 07:39:20.841236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 [2024-07-15 07:39:20.841407] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.476 [2024-07-15 07:39:20.841545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.476 [2024-07-15 07:39:20.841692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:42.476 07:39:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:30:42.476 07:39:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:42.734 [2024-07-15 07:39:21.236348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:30:42.734 [2024-07-15 07:39:21.239479] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.734 [2024-07-15 07:39:21.239537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.734 [2024-07-15 07:39:21.239561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.734 [2024-07-15 07:39:21.239597] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.734 [2024-07-15 07:39:21.239612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.734 [2024-07-15 07:39:21.239669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.734 [2024-07-15 07:39:21.239686] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.734 [2024-07-15 07:39:21.239703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.734 [2024-07-15 07:39:21.239718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.734 [2024-07-15 07:39:21.239737] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:42.735 [2024-07-15 07:39:21.239751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:42.735 [2024-07-15 07:39:21.239768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:42.992 07:39:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:42.992 07:39:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:42.992 07:39:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:42.992 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:43.251 07:39:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:55.457 [2024-07-15 07:39:33.836503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
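Once the bdevs are gone, the helper brings the controllers back: sw_hotplug.sh@56 writes a single 1, and lines @58-@62 then write uio_pci_generic, the BDF, and an empty string for each device. The sysfs files behind those writes are not shown in the trace; one common way to express that rescan-and-rebind sequence, given purely as an assumed reconstruction (the duplicate BDF write at @60/@61 is collapsed here):

# Assumed sysfs targets; the echoed values match the trace, the paths do not appear in it.
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done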
00:30:55.457 [2024-07-15 07:39:33.840099] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.457 [2024-07-15 07:39:33.840267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.457 [2024-07-15 07:39:33.840475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.457 [2024-07-15 07:39:33.840678] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.457 [2024-07-15 07:39:33.840845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.457 [2024-07-15 07:39:33.841007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.457 [2024-07-15 07:39:33.841192] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.457 [2024-07-15 07:39:33.841317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.457 [2024-07-15 07:39:33.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.457 [2024-07-15 07:39:33.841668] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.457 [2024-07-15 07:39:33.841806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.457 [2024-07-15 07:39:33.841947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.457 07:39:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:30:55.457 07:39:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:30:55.723 [2024-07-15 07:39:34.236550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:30:55.723 [2024-07-15 07:39:34.240113] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.724 [2024-07-15 07:39:34.240301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.724 [2024-07-15 07:39:34.240483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.724 [2024-07-15 07:39:34.240671] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.724 [2024-07-15 07:39:34.240852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.724 [2024-07-15 07:39:34.241023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.724 [2024-07-15 07:39:34.241179] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.724 [2024-07-15 07:39:34.241382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.724 [2024-07-15 07:39:34.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.724 [2024-07-15 07:39:34.241769] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:30:55.724 [2024-07-15 07:39:34.241885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.724 [2024-07-15 07:39:34.242048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:30:55.981 07:39:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.981 07:39:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:30:55.981 07:39:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:55.981 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:30:56.238 07:39:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:31:08.490 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.05 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.05 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:31:08.491 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:31:08.491 07:39:46 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:31:08.491 07:39:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:31:08.491 07:39:46 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:15.049 07:39:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.049 07:39:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:15.049 07:39:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.049 [2024-07-15 07:39:52.912231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:31:15.049 [2024-07-15 07:39:52.914645] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:52.914700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:52.914739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:52.914771] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:52.914803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:52.914820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:52.914842] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:52.914857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:52.914877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:52.914893] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:52.914913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:52.914927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:31:15.049 07:39:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:31:15.049 [2024-07-15 07:39:53.312251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
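The 45.05 s figure reported a little earlier comes from timing_cmd, which runs the helper under bash's time builtin with TIMEFORMAT=%2R so that only the elapsed seconds are printed. A minimal sketch of that idiom; the function body here is illustrative, not the autotest_common.sh source:

# Run a command, discard its output, and return just the wall-clock seconds (two decimals).
timing_cmd() {
    local TIMEFORMAT=%2R elapsed
    elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
    echo "$elapsed"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2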
00:31:15.049 [2024-07-15 07:39:53.314799] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:53.314993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:53.315140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:53.315185] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:53.315204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:53.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:53.315238] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:53.315255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:53.315269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 [2024-07-15 07:39:53.315287] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:15.049 [2024-07-15 07:39:53.315301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.049 [2024-07-15 07:39:53.315318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:15.049 07:39:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.049 07:39:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:15.049 07:39:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:15.049 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:15.307 07:39:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:27.629 07:40:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.629 07:40:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:27.629 07:40:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:27.629 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:27.630 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:27.630 07:40:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.630 07:40:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:27.630 [2024-07-15 07:40:05.912428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
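The pass now running differs from the first only in that SPDK's own hotplug monitor is cycled first: the rpc_cmd bdev_nvme_set_hotplug -d / -e calls at sw_hotplug.sh@119-120 disable and then re-enable the bdev_nvme hotplug poller before the same three remove/attach events are replayed. Assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, the equivalent direct calls would be:

# Disable, then re-enable, the bdev_nvme hotplug poller on the running target.
scripts/rpc.py bdev_nvme_set_hotplug -d
scripts/rpc.py bdev_nvme_set_hotplug -e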
00:31:27.630 07:40:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.630 [2024-07-15 07:40:05.914866] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.630 [2024-07-15 07:40:05.914918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.630 [2024-07-15 07:40:05.914963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.630 [2024-07-15 07:40:05.914995] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.630 [2024-07-15 07:40:05.915014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.630 [2024-07-15 07:40:05.915029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.630 [2024-07-15 07:40:05.915048] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.630 [2024-07-15 07:40:05.915062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.630 [2024-07-15 07:40:05.915080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.630 [2024-07-15 07:40:05.915095] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.630 [2024-07-15 07:40:05.915111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.630 [2024-07-15 07:40:05.915127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.630 [2024-07-15 07:40:05.915150] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:31:27.630 [2024-07-15 07:40:05.915167] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:31:27.630 [2024-07-15 07:40:05.915198] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:31:27.630 [2024-07-15 07:40:05.915211] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:31:27.630 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:31:27.630 07:40:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:31:27.888 [2024-07-15 07:40:06.312464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:31:27.888 [2024-07-15 07:40:06.314906] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.888 [2024-07-15 07:40:06.314971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.888 [2024-07-15 07:40:06.314994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.888 [2024-07-15 07:40:06.315030] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.888 [2024-07-15 07:40:06.315047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.888 [2024-07-15 07:40:06.315077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.888 [2024-07-15 07:40:06.315093] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.888 [2024-07-15 07:40:06.315110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.888 [2024-07-15 07:40:06.315125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.888 [2024-07-15 07:40:06.315144] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:27.888 [2024-07-15 07:40:06.315158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:27.888 [2024-07-15 07:40:06.315175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:27.888 07:40:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.888 07:40:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:27.888 07:40:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:31:27.888 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:31:28.156 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:31:28.430 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:28.430 07:40:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:40.686 [2024-07-15 07:40:18.912728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:31:40.686 [2024-07-15 07:40:18.917371] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.686 [2024-07-15 07:40:18.917660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.686 [2024-07-15 07:40:18.917967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.686 [2024-07-15 07:40:18.918216] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.686 [2024-07-15 07:40:18.918427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.686 [2024-07-15 07:40:18.918652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.686 [2024-07-15 07:40:18.918924] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.686 [2024-07-15 07:40:18.919105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.686 [2024-07-15 07:40:18.919330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.686 [2024-07-15 07:40:18.919578] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.686 [2024-07-15 07:40:18.919784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.686 [2024-07-15 07:40:18.920067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.686 07:40:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:31:40.686 07:40:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:31:40.945 [2024-07-15 07:40:19.312711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:31:40.945 [2024-07-15 07:40:19.315440] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.945 [2024-07-15 07:40:19.315655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.945 [2024-07-15 07:40:19.315820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.945 [2024-07-15 07:40:19.315997] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.945 [2024-07-15 07:40:19.316135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.945 [2024-07-15 07:40:19.316298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.945 [2024-07-15 07:40:19.316494] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.946 [2024-07-15 07:40:19.316622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.946 [2024-07-15 07:40:19.316776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.946 [2024-07-15 07:40:19.317016] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:31:40.946 [2024-07-15 07:40:19.317071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.946 [2024-07-15 07:40:19.317229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:40.946 07:40:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.946 07:40:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:40.946 07:40:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:31:40.946 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:31:41.204 07:40:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.04 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.04 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:31:53.404 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:31:53.404 07:40:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74217 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74217 ']' 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74217 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74217 00:31:53.404 killing process with pid 74217 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74217' 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74217 00:31:53.404 07:40:31 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74217 00:31:55.974 07:40:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:56.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:56.796 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:56.796 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:56.796 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:31:57.054 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:31:57.054 00:31:57.054 real 2m32.091s 00:31:57.054 user 1m53.160s 00:31:57.054 sys 0m18.744s 00:31:57.054 
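The teardown traced just above (a kill -0 liveness probe, a ps comm check, then kill and wait on pid 74217) is the standard killprocess pattern used throughout these tests. A hedged sketch of that pattern rather than the verbatim autotest_common.sh implementation:

# Stop a previously started SPDK app by pid, guarding against stale or foreign pids.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0          # nothing left to kill
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1                  # refuse to kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

killprocess 74217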
************************************ 00:31:57.054 END TEST sw_hotplug 00:31:57.054 ************************************ 00:31:57.054 07:40:35 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:57.054 07:40:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:31:57.054 07:40:35 -- common/autotest_common.sh@1142 -- # return 0 00:31:57.054 07:40:35 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:31:57.054 07:40:35 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:31:57.054 07:40:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:57.054 07:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.054 07:40:35 -- common/autotest_common.sh@10 -- # set +x 00:31:57.054 ************************************ 00:31:57.054 START TEST nvme_xnvme 00:31:57.054 ************************************ 00:31:57.054 07:40:35 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:31:57.054 * Looking for test storage... 00:31:57.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:31:57.054 07:40:35 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:57.054 07:40:35 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.054 07:40:35 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.054 07:40:35 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.055 07:40:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.055 07:40:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.055 07:40:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.055 07:40:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:31:57.055 07:40:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.055 07:40:35 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy 
malloc_to_xnvme_copy 00:31:57.055 07:40:35 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:57.055 07:40:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.055 07:40:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:57.055 ************************************ 00:31:57.055 START TEST xnvme_to_malloc_dd_copy 00:31:57.055 ************************************ 00:31:57.055 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:31:57.055 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:31:57.055 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:31:57.055 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:57.312 07:40:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:31:57.312 { 00:31:57.312 "subsystems": [ 00:31:57.312 { 00:31:57.312 "subsystem": "bdev", 00:31:57.312 "config": [ 00:31:57.312 { 00:31:57.312 "params": { 00:31:57.312 "block_size": 512, 00:31:57.312 "num_blocks": 2097152, 00:31:57.312 "name": "malloc0" 00:31:57.312 }, 
00:31:57.312 "method": "bdev_malloc_create" 00:31:57.312 }, 00:31:57.312 { 00:31:57.312 "params": { 00:31:57.312 "io_mechanism": "libaio", 00:31:57.312 "filename": "/dev/nullb0", 00:31:57.312 "name": "null0" 00:31:57.312 }, 00:31:57.312 "method": "bdev_xnvme_create" 00:31:57.312 }, 00:31:57.312 { 00:31:57.312 "method": "bdev_wait_for_examine" 00:31:57.312 } 00:31:57.312 ] 00:31:57.312 } 00:31:57.312 ] 00:31:57.312 } 00:31:57.312 [2024-07-15 07:40:35.792182] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:31:57.312 [2024-07-15 07:40:35.792736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75546 ] 00:31:57.570 [2024-07-15 07:40:35.974174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.828 [2024-07-15 07:40:36.335298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.602  Copying: 147/1024 [MB] (147 MBps) Copying: 294/1024 [MB] (147 MBps) Copying: 442/1024 [MB] (148 MBps) Copying: 593/1024 [MB] (150 MBps) Copying: 743/1024 [MB] (149 MBps) Copying: 889/1024 [MB] (146 MBps) Copying: 1024/1024 [MB] (average 148 MBps) 00:32:10.602 00:32:10.602 07:40:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:32:10.602 07:40:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:32:10.602 07:40:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:32:10.602 07:40:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:32:10.602 { 00:32:10.602 "subsystems": [ 00:32:10.602 { 00:32:10.602 "subsystem": "bdev", 00:32:10.602 "config": [ 00:32:10.602 { 00:32:10.602 "params": { 00:32:10.602 "block_size": 512, 00:32:10.602 "num_blocks": 2097152, 00:32:10.602 "name": "malloc0" 00:32:10.602 }, 00:32:10.602 "method": "bdev_malloc_create" 00:32:10.602 }, 00:32:10.602 { 00:32:10.602 "params": { 00:32:10.602 "io_mechanism": "libaio", 00:32:10.602 "filename": "/dev/nullb0", 00:32:10.602 "name": "null0" 00:32:10.602 }, 00:32:10.602 "method": "bdev_xnvme_create" 00:32:10.602 }, 00:32:10.602 { 00:32:10.602 "method": "bdev_wait_for_examine" 00:32:10.602 } 00:32:10.602 ] 00:32:10.602 } 00:32:10.602 ] 00:32:10.602 } 00:32:10.602 [2024-07-15 07:40:49.160584] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:32:10.602 [2024-07-15 07:40:49.160857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75696 ] 00:32:10.859 [2024-07-15 07:40:49.337709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.116 [2024-07-15 07:40:49.610747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.240  Copying: 161/1024 [MB] (161 MBps) Copying: 320/1024 [MB] (159 MBps) Copying: 480/1024 [MB] (159 MBps) Copying: 639/1024 [MB] (159 MBps) Copying: 799/1024 [MB] (159 MBps) Copying: 955/1024 [MB] (156 MBps) Copying: 1024/1024 [MB] (average 159 MBps) 00:32:23.240 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:32:23.498 07:41:01 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:32:23.498 { 00:32:23.498 "subsystems": [ 00:32:23.498 { 00:32:23.498 "subsystem": "bdev", 00:32:23.498 "config": [ 00:32:23.498 { 00:32:23.498 "params": { 00:32:23.498 "block_size": 512, 00:32:23.498 "num_blocks": 2097152, 00:32:23.498 "name": "malloc0" 00:32:23.498 }, 00:32:23.498 "method": "bdev_malloc_create" 00:32:23.498 }, 00:32:23.498 { 00:32:23.498 "params": { 00:32:23.498 "io_mechanism": "io_uring", 00:32:23.498 "filename": "/dev/nullb0", 00:32:23.498 "name": "null0" 00:32:23.498 }, 00:32:23.498 "method": "bdev_xnvme_create" 00:32:23.498 }, 00:32:23.498 { 00:32:23.498 "method": "bdev_wait_for_examine" 00:32:23.498 } 00:32:23.498 ] 00:32:23.498 } 00:32:23.498 ] 00:32:23.498 } 00:32:23.498 [2024-07-15 07:41:01.977475] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
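The two libaio passes above copy the 1 GiB malloc0 bdev onto the null_blk-backed null0 bdev and back, averaging roughly 148 and 159 MBps. As a reference, the configuration the test pipes to spdk_dd over /dev/fd/62 can be reproduced by hand; the sketch below assumes the same repo checkout and writes the config to a temporary file instead of a file descriptor (the /tmp path is illustrative only):

# 1 GiB null_blk device backing /dev/nullb0, as init_null_blk does above
modprobe null_blk gb=1
# bdev config equivalent to the JSON printed in the trace
cat > /tmp/xnvme_copy.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
 {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":2097152,"block_size":512}},
 {"method":"bdev_xnvme_create","params":{"name":"null0","filename":"/dev/nullb0","io_mechanism":"libaio"}},
 {"method":"bdev_wait_for_examine"}]}]}
EOF
# malloc0 -> null0 and the reverse direction, as xnvme.sh@42 and @47 do
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json
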
00:32:23.498 [2024-07-15 07:41:01.977672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75835 ] 00:32:23.757 [2024-07-15 07:41:02.160220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.015 [2024-07-15 07:41:02.481712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.877  Copying: 163/1024 [MB] (163 MBps) Copying: 327/1024 [MB] (163 MBps) Copying: 490/1024 [MB] (163 MBps) Copying: 663/1024 [MB] (172 MBps) Copying: 831/1024 [MB] (167 MBps) Copying: 991/1024 [MB] (159 MBps) Copying: 1024/1024 [MB] (average 165 MBps) 00:32:35.877 00:32:35.877 07:41:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:32:35.877 07:41:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:32:35.877 07:41:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:32:35.877 07:41:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:32:35.877 { 00:32:35.877 "subsystems": [ 00:32:35.877 { 00:32:35.877 "subsystem": "bdev", 00:32:35.877 "config": [ 00:32:35.877 { 00:32:35.877 "params": { 00:32:35.877 "block_size": 512, 00:32:35.877 "num_blocks": 2097152, 00:32:35.877 "name": "malloc0" 00:32:35.877 }, 00:32:35.877 "method": "bdev_malloc_create" 00:32:35.877 }, 00:32:35.877 { 00:32:35.877 "params": { 00:32:35.877 "io_mechanism": "io_uring", 00:32:35.878 "filename": "/dev/nullb0", 00:32:35.878 "name": "null0" 00:32:35.878 }, 00:32:35.878 "method": "bdev_xnvme_create" 00:32:35.878 }, 00:32:35.878 { 00:32:35.878 "method": "bdev_wait_for_examine" 00:32:35.878 } 00:32:35.878 ] 00:32:35.878 } 00:32:35.878 ] 00:32:35.878 } 00:32:35.878 [2024-07-15 07:41:14.480482] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:32:35.878 [2024-07-15 07:41:14.480682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75972 ] 00:32:36.137 [2024-07-15 07:41:14.663189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.396 [2024-07-15 07:41:14.941044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.211  Copying: 170/1024 [MB] (170 MBps) Copying: 345/1024 [MB] (174 MBps) Copying: 518/1024 [MB] (173 MBps) Copying: 689/1024 [MB] (170 MBps) Copying: 861/1024 [MB] (171 MBps) Copying: 1024/1024 [MB] (average 172 MBps) 00:32:48.211 00:32:48.211 07:41:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:32:48.211 07:41:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:32:48.211 ************************************ 00:32:48.211 END TEST xnvme_to_malloc_dd_copy 00:32:48.211 ************************************ 00:32:48.211 00:32:48.211 real 0m51.001s 00:32:48.211 user 0m43.698s 00:32:48.211 sys 0m6.649s 00:32:48.211 07:41:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.211 07:41:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:32:48.211 07:41:26 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:32:48.211 07:41:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:32:48.211 07:41:26 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:48.211 07:41:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.211 07:41:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:48.211 ************************************ 00:32:48.211 START TEST xnvme_bdevperf 00:32:48.211 ************************************ 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:48.211 07:41:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:48.211 { 00:32:48.211 "subsystems": [ 00:32:48.211 { 00:32:48.211 "subsystem": "bdev", 00:32:48.211 "config": [ 00:32:48.211 { 00:32:48.211 "params": { 00:32:48.211 "io_mechanism": "libaio", 00:32:48.211 "filename": "/dev/nullb0", 00:32:48.211 "name": "null0" 00:32:48.211 }, 00:32:48.211 "method": "bdev_xnvme_create" 00:32:48.211 }, 00:32:48.211 { 00:32:48.211 "method": "bdev_wait_for_examine" 00:32:48.211 } 00:32:48.211 ] 00:32:48.211 } 00:32:48.211 ] 00:32:48.211 } 00:32:48.211 [2024-07-15 07:41:26.812068] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:32:48.211 [2024-07-15 07:41:26.812260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76132 ] 00:32:48.469 [2024-07-15 07:41:26.983632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.034 [2024-07-15 07:41:27.344885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.292 Running I/O for 5 seconds... 00:32:54.555 00:32:54.555 Latency(us) 00:32:54.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.555 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:32:54.555 null0 : 5.00 115395.77 450.76 0.00 0.00 551.22 186.18 1191.56 00:32:54.555 =================================================================================================================== 00:32:54.555 Total : 115395.77 450.76 0.00 0.00 551.22 186.18 1191.56 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:55.929 07:41:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:55.929 { 00:32:55.929 "subsystems": [ 00:32:55.929 { 00:32:55.929 "subsystem": "bdev", 00:32:55.929 "config": [ 00:32:55.929 { 00:32:55.929 "params": { 00:32:55.929 "io_mechanism": "io_uring", 00:32:55.929 "filename": "/dev/nullb0", 00:32:55.929 "name": "null0" 00:32:55.929 }, 00:32:55.929 "method": "bdev_xnvme_create" 00:32:55.929 }, 00:32:55.929 { 00:32:55.929 "method": "bdev_wait_for_examine" 00:32:55.929 } 00:32:55.929 ] 00:32:55.929 } 00:32:55.929 ] 00:32:55.929 } 00:32:55.929 [2024-07-15 07:41:34.206156] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
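The libaio bdevperf pass above sustains about 115k IOPS of 4 KiB random reads against null0 over its five-second run; the io_uring pass that follows reuses the same invocation with only the io_mechanism swapped in the config. The command line, copied from the trace, can be run standalone against any JSON config that defines the null0 xnvme bdev (the config path below is illustrative; -T names the bdev under test):

# -q queue depth, -w workload, -t runtime in seconds, -o I/O size in bytes
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_null0.json \
    -q 64 -w randread -t 5 -T null0 -o 4096
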
00:32:55.929 [2024-07-15 07:41:34.206371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76217 ] 00:32:55.929 [2024-07-15 07:41:34.377579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.187 [2024-07-15 07:41:34.722641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.753 Running I/O for 5 seconds... 00:33:02.079 00:33:02.079 Latency(us) 00:33:02.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.079 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:33:02.079 null0 : 5.00 151456.92 591.63 0.00 0.00 419.23 225.28 1087.30 00:33:02.079 =================================================================================================================== 00:33:02.079 Total : 151456.92 591.63 0.00 0.00 419.23 225.28 1087.30 00:33:03.014 07:41:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:33:03.014 07:41:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:33:03.014 00:33:03.014 real 0m14.754s 00:33:03.014 user 0m11.424s 00:33:03.014 sys 0m3.096s 00:33:03.014 07:41:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.014 07:41:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:03.014 ************************************ 00:33:03.014 END TEST xnvme_bdevperf 00:33:03.014 ************************************ 00:33:03.014 07:41:41 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:33:03.014 00:33:03.014 real 1m5.942s 00:33:03.014 user 0m55.186s 00:33:03.014 sys 0m9.857s 00:33:03.014 07:41:41 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:03.014 07:41:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:03.014 ************************************ 00:33:03.014 END TEST nvme_xnvme 00:33:03.014 ************************************ 00:33:03.014 07:41:41 -- common/autotest_common.sh@1142 -- # return 0 00:33:03.014 07:41:41 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:33:03.014 07:41:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:03.014 07:41:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.014 07:41:41 -- common/autotest_common.sh@10 -- # set +x 00:33:03.014 ************************************ 00:33:03.014 START TEST blockdev_xnvme 00:33:03.014 ************************************ 00:33:03.014 07:41:41 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:33:03.015 * Looking for test storage... 
00:33:03.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:03.015 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76366 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76366 00:33:03.273 07:41:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76366 ']' 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:03.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:03.273 07:41:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:03.273 [2024-07-15 07:41:41.776826] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
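For blockdev_xnvme the suite switches from standalone tools to a long-running spdk_tgt (pid 76366) driven over /var/tmp/spdk.sock; the xnvme bdevs used by the remaining tests are then registered with plain RPC calls in the setup_xnvme_conf step that follows. A hand-run sketch of that flow, with the polling loop standing in for waitforlisten and only the first namespace shown:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!
# wait for the RPC socket before issuing commands
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# register one xnvme bdev per namespace, e.g. the first one created below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
kill "$tgt_pid"
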
00:33:03.273 [2024-07-15 07:41:41.777020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76366 ] 00:33:03.531 [2024-07-15 07:41:41.956625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.790 [2024-07-15 07:41:42.232074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.725 07:41:43 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.725 07:41:43 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:33:04.725 07:41:43 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:04.725 07:41:43 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:33:04.725 07:41:43 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:33:04.725 07:41:43 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:33:04.725 07:41:43 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:04.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:05.239 Waiting for block devices as requested 00:33:05.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:05.239 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:05.497 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:05.497 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:10.773 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:10.773 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:33:10.773 07:41:49 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:33:10.773 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:33:10.774 nvme0n1 00:33:10.774 nvme1n1 00:33:10.774 nvme2n1 00:33:10.774 nvme2n2 00:33:10.774 nvme2n3 00:33:10.774 nvme3n1 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == 
false)' 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5ca8bf42-2cc8-4ffc-bb84-bdd7c9cd3d9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5ca8bf42-2cc8-4ffc-bb84-bdd7c9cd3d9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b74770ec-f5a4-474c-bd7b-58b818562183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b74770ec-f5a4-474c-bd7b-58b818562183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "49a8af11-3b6c-4f72-b70c-5c2d8250be8a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "49a8af11-3b6c-4f72-b70c-5c2d8250be8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "485503eb-af5d-4880-abf8-b8450f55221f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "485503eb-af5d-4880-abf8-b8450f55221f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b2554b6c-bff3-4f44-88dd-1adb6ecaf4e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2554b6c-bff3-4f44-88dd-1adb6ecaf4e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1c95daa7-9a2b-4e1f-b5c5-2c80db2c4070"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1c95daa7-9a2b-4e1f-b5c5-2c80db2c4070",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:10.774 07:41:49 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 76366 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76366 ']' 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76366 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76366 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:10.774 killing process with pid 76366 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 76366' 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76366 00:33:10.774 07:41:49 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76366 00:33:13.303 07:41:51 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:13.303 07:41:51 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:33:13.303 07:41:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:13.303 07:41:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.303 07:41:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:13.303 ************************************ 00:33:13.303 START TEST bdev_hello_world 00:33:13.303 ************************************ 00:33:13.304 07:41:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:33:13.561 [2024-07-15 07:41:51.978504] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:33:13.561 [2024-07-15 07:41:51.978691] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76739 ] 00:33:13.561 [2024-07-15 07:41:52.148166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.819 [2024-07-15 07:41:52.425695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.384 [2024-07-15 07:41:52.917260] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:14.384 [2024-07-15 07:41:52.917345] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:33:14.384 [2024-07-15 07:41:52.917376] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:14.384 [2024-07-15 07:41:52.920071] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:14.384 [2024-07-15 07:41:52.920501] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:14.385 [2024-07-15 07:41:52.920530] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:14.385 [2024-07-15 07:41:52.920730] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
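The hello_bdev example above opens the first xnvme bdev from the generated config, writes "Hello World!", reads it back and stops. Its invocation, taken from the trace, can be repeated on its own (-b selects which bdev in the JSON config the example opens):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
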
00:33:14.385 00:33:14.385 [2024-07-15 07:41:52.920759] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:15.756 00:33:15.756 real 0m2.358s 00:33:15.756 user 0m1.890s 00:33:15.756 sys 0m0.350s 00:33:15.756 07:41:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:15.756 07:41:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:15.756 ************************************ 00:33:15.756 END TEST bdev_hello_world 00:33:15.756 ************************************ 00:33:15.756 07:41:54 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:33:15.756 07:41:54 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:15.756 07:41:54 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:15.756 07:41:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:15.756 07:41:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:15.756 ************************************ 00:33:15.756 START TEST bdev_bounds 00:33:15.756 ************************************ 00:33:15.756 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:33:15.756 07:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=76781 00:33:15.756 07:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:15.757 Process bdevio pid: 76781 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 76781' 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 76781 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76781 ']' 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:15.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:15.757 07:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:16.013 [2024-07-15 07:41:54.414630] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
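bdev_bounds wraps the bdevio app: bdevio is started with -w so it waits on the RPC socket, and tests.py perform_tests then triggers the per-bdev boundary suites whose results are listed below. A hand-run equivalent using the paths from the trace (sketch only; flags copied unchanged):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
# same wait as above: let the app come up before driving it
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# kick off the suites, then let bdevio finish on its own
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
wait "$bdevio_pid"
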
00:33:16.013 [2024-07-15 07:41:54.414851] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76781 ] 00:33:16.013 [2024-07-15 07:41:54.590965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.272 [2024-07-15 07:41:54.876536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.272 [2024-07-15 07:41:54.876608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.272 [2024-07-15 07:41:54.876616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.862 07:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:16.862 07:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:33:16.862 07:41:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:17.119 I/O targets: 00:33:17.119 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:33:17.119 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:33:17.119 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:33:17.119 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:33:17.119 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:33:17.119 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:33:17.119 00:33:17.119 00:33:17.119 CUnit - A unit testing framework for C - Version 2.1-3 00:33:17.119 http://cunit.sourceforge.net/ 00:33:17.119 00:33:17.119 00:33:17.119 Suite: bdevio tests on: nvme3n1 00:33:17.119 Test: blockdev write read block ...passed 00:33:17.119 Test: blockdev write zeroes read block ...passed 00:33:17.119 Test: blockdev write zeroes read no split ...passed 00:33:17.119 Test: blockdev write zeroes read split ...passed 00:33:17.119 Test: blockdev write zeroes read split partial ...passed 00:33:17.119 Test: blockdev reset ...passed 00:33:17.119 Test: blockdev write read 8 blocks ...passed 00:33:17.119 Test: blockdev write read size > 128k ...passed 00:33:17.119 Test: blockdev write read invalid size ...passed 00:33:17.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.119 Test: blockdev write read max offset ...passed 00:33:17.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.119 Test: blockdev writev readv 8 blocks ...passed 00:33:17.119 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.119 Test: blockdev writev readv block ...passed 00:33:17.119 Test: blockdev writev readv size > 128k ...passed 00:33:17.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.119 Test: blockdev comparev and writev ...passed 00:33:17.119 Test: blockdev nvme passthru rw ...passed 00:33:17.119 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.119 Test: blockdev nvme admin passthru ...passed 00:33:17.119 Test: blockdev copy ...passed 00:33:17.119 Suite: bdevio tests on: nvme2n3 00:33:17.119 Test: blockdev write read block ...passed 00:33:17.119 Test: blockdev write zeroes read block ...passed 00:33:17.119 Test: blockdev write zeroes read no split ...passed 00:33:17.119 Test: blockdev write zeroes read split ...passed 00:33:17.119 Test: blockdev write zeroes read split partial ...passed 00:33:17.119 Test: blockdev reset ...passed 
00:33:17.119 Test: blockdev write read 8 blocks ...passed 00:33:17.119 Test: blockdev write read size > 128k ...passed 00:33:17.119 Test: blockdev write read invalid size ...passed 00:33:17.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.119 Test: blockdev write read max offset ...passed 00:33:17.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.119 Test: blockdev writev readv 8 blocks ...passed 00:33:17.119 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.119 Test: blockdev writev readv block ...passed 00:33:17.119 Test: blockdev writev readv size > 128k ...passed 00:33:17.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.119 Test: blockdev comparev and writev ...passed 00:33:17.119 Test: blockdev nvme passthru rw ...passed 00:33:17.119 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.119 Test: blockdev nvme admin passthru ...passed 00:33:17.119 Test: blockdev copy ...passed 00:33:17.119 Suite: bdevio tests on: nvme2n2 00:33:17.119 Test: blockdev write read block ...passed 00:33:17.119 Test: blockdev write zeroes read block ...passed 00:33:17.119 Test: blockdev write zeroes read no split ...passed 00:33:17.119 Test: blockdev write zeroes read split ...passed 00:33:17.378 Test: blockdev write zeroes read split partial ...passed 00:33:17.378 Test: blockdev reset ...passed 00:33:17.378 Test: blockdev write read 8 blocks ...passed 00:33:17.378 Test: blockdev write read size > 128k ...passed 00:33:17.378 Test: blockdev write read invalid size ...passed 00:33:17.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.378 Test: blockdev write read max offset ...passed 00:33:17.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.378 Test: blockdev writev readv 8 blocks ...passed 00:33:17.378 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.378 Test: blockdev writev readv block ...passed 00:33:17.378 Test: blockdev writev readv size > 128k ...passed 00:33:17.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.378 Test: blockdev comparev and writev ...passed 00:33:17.378 Test: blockdev nvme passthru rw ...passed 00:33:17.378 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.378 Test: blockdev nvme admin passthru ...passed 00:33:17.378 Test: blockdev copy ...passed 00:33:17.378 Suite: bdevio tests on: nvme2n1 00:33:17.378 Test: blockdev write read block ...passed 00:33:17.378 Test: blockdev write zeroes read block ...passed 00:33:17.378 Test: blockdev write zeroes read no split ...passed 00:33:17.378 Test: blockdev write zeroes read split ...passed 00:33:17.378 Test: blockdev write zeroes read split partial ...passed 00:33:17.378 Test: blockdev reset ...passed 00:33:17.378 Test: blockdev write read 8 blocks ...passed 00:33:17.378 Test: blockdev write read size > 128k ...passed 00:33:17.378 Test: blockdev write read invalid size ...passed 00:33:17.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.378 Test: blockdev write read max offset ...passed 00:33:17.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.378 Test: blockdev writev readv 8 blocks 
...passed 00:33:17.378 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.378 Test: blockdev writev readv block ...passed 00:33:17.378 Test: blockdev writev readv size > 128k ...passed 00:33:17.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.378 Test: blockdev comparev and writev ...passed 00:33:17.378 Test: blockdev nvme passthru rw ...passed 00:33:17.378 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.378 Test: blockdev nvme admin passthru ...passed 00:33:17.378 Test: blockdev copy ...passed 00:33:17.378 Suite: bdevio tests on: nvme1n1 00:33:17.378 Test: blockdev write read block ...passed 00:33:17.378 Test: blockdev write zeroes read block ...passed 00:33:17.378 Test: blockdev write zeroes read no split ...passed 00:33:17.378 Test: blockdev write zeroes read split ...passed 00:33:17.378 Test: blockdev write zeroes read split partial ...passed 00:33:17.378 Test: blockdev reset ...passed 00:33:17.378 Test: blockdev write read 8 blocks ...passed 00:33:17.378 Test: blockdev write read size > 128k ...passed 00:33:17.378 Test: blockdev write read invalid size ...passed 00:33:17.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.378 Test: blockdev write read max offset ...passed 00:33:17.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.378 Test: blockdev writev readv 8 blocks ...passed 00:33:17.378 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.378 Test: blockdev writev readv block ...passed 00:33:17.378 Test: blockdev writev readv size > 128k ...passed 00:33:17.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.378 Test: blockdev comparev and writev ...passed 00:33:17.378 Test: blockdev nvme passthru rw ...passed 00:33:17.378 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.378 Test: blockdev nvme admin passthru ...passed 00:33:17.378 Test: blockdev copy ...passed 00:33:17.378 Suite: bdevio tests on: nvme0n1 00:33:17.378 Test: blockdev write read block ...passed 00:33:17.378 Test: blockdev write zeroes read block ...passed 00:33:17.378 Test: blockdev write zeroes read no split ...passed 00:33:17.378 Test: blockdev write zeroes read split ...passed 00:33:17.636 Test: blockdev write zeroes read split partial ...passed 00:33:17.636 Test: blockdev reset ...passed 00:33:17.636 Test: blockdev write read 8 blocks ...passed 00:33:17.636 Test: blockdev write read size > 128k ...passed 00:33:17.636 Test: blockdev write read invalid size ...passed 00:33:17.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:17.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:17.636 Test: blockdev write read max offset ...passed 00:33:17.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:17.636 Test: blockdev writev readv 8 blocks ...passed 00:33:17.636 Test: blockdev writev readv 30 x 1block ...passed 00:33:17.636 Test: blockdev writev readv block ...passed 00:33:17.636 Test: blockdev writev readv size > 128k ...passed 00:33:17.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:17.636 Test: blockdev comparev and writev ...passed 00:33:17.636 Test: blockdev nvme passthru rw ...passed 00:33:17.636 Test: blockdev nvme passthru vendor specific ...passed 00:33:17.636 Test: blockdev nvme admin passthru ...passed 00:33:17.636 Test: blockdev copy ...passed 
00:33:17.636 00:33:17.636 Run Summary: Type Total Ran Passed Failed Inactive 00:33:17.636 suites 6 6 n/a 0 0 00:33:17.636 tests 138 138 138 0 0 00:33:17.636 asserts 780 780 780 0 n/a 00:33:17.636 00:33:17.636 Elapsed time = 1.395 seconds 00:33:17.636 0 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 76781 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76781 ']' 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76781 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76781 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76781' 00:33:17.636 killing process with pid 76781 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76781 00:33:17.636 07:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76781 00:33:19.007 07:41:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:19.007 00:33:19.007 real 0m3.077s 00:33:19.007 user 0m6.975s 00:33:19.007 sys 0m0.519s 00:33:19.007 07:41:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:19.007 07:41:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:19.007 ************************************ 00:33:19.007 END TEST bdev_bounds 00:33:19.007 ************************************ 00:33:19.007 07:41:57 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:33:19.007 07:41:57 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:33:19.007 07:41:57 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:19.007 07:41:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.007 07:41:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:19.007 ************************************ 00:33:19.007 START TEST bdev_nbd 00:33:19.007 ************************************ 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76856 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76856 /var/tmp/spdk-nbd.sock 00:33:19.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76856 ']' 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.007 07:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:19.007 [2024-07-15 07:41:57.546232] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:33:19.007 [2024-07-15 07:41:57.547085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.330 [2024-07-15 07:41:57.722423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.644 [2024-07-15 07:41:58.021734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:20.210 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:20.211 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:20.211 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:20.211 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:20.211 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:20.211 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:20.469 
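Each nbd_start_disk call in this phase is followed by a waitfornbd-style readiness probe: poll /proc/partitions until the new device shows up, then prove the connection works with a single O_DIRECT read. A simplified sketch of that probe (temporary file path shortened; the traced helper also stats the copied block before returning) is:
waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  grep -q -w "$nbd_name" /proc/partitions || return 1
  # one 4 KiB direct read shows the NBD device really serves I/O
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  rm -f /tmp/nbdtest
}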
1+0 records in 00:33:20.469 1+0 records out 00:33:20.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516051 s, 7.9 MB/s 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:20.469 07:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:20.728 1+0 records in 00:33:20.728 1+0 records out 00:33:20.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451734 s, 9.1 MB/s 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:20.728 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:33:20.986 07:41:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:20.986 1+0 records in 00:33:20.986 1+0 records out 00:33:20.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403405 s, 10.2 MB/s 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:20.986 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:33:21.244 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.245 1+0 records in 00:33:21.245 1+0 records out 00:33:21.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783876 s, 5.2 MB/s 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:21.245 07:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.503 1+0 records in 00:33:21.503 1+0 records out 00:33:21.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703907 s, 5.8 MB/s 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:21.503 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:33:21.761 07:42:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:21.761 1+0 records in 00:33:21.761 1+0 records out 00:33:21.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874665 s, 4.7 MB/s 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:21.761 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:21.762 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:33:21.762 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:22.020 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd0", 00:33:22.020 "bdev_name": "nvme0n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd1", 00:33:22.020 "bdev_name": "nvme1n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd2", 00:33:22.020 "bdev_name": "nvme2n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd3", 00:33:22.020 "bdev_name": "nvme2n2" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd4", 00:33:22.020 "bdev_name": "nvme2n3" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd5", 00:33:22.020 "bdev_name": "nvme3n1" 00:33:22.020 } 00:33:22.020 ]' 00:33:22.020 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:22.020 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd0", 00:33:22.020 "bdev_name": "nvme0n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd1", 00:33:22.020 "bdev_name": "nvme1n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd2", 00:33:22.020 "bdev_name": "nvme2n1" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd3", 00:33:22.020 "bdev_name": "nvme2n2" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd4", 00:33:22.020 "bdev_name": "nvme2n3" 00:33:22.020 }, 00:33:22.020 { 00:33:22.020 "nbd_device": "/dev/nbd5", 00:33:22.020 "bdev_name": "nvme3n1" 00:33:22.020 } 00:33:22.020 ]' 00:33:22.020 07:42:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:22.279 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:22.537 07:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:22.795 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.087 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.364 07:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.622 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.880 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:24.138 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:33:24.397 /dev/nbd0 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.397 1+0 records in 00:33:24.397 1+0 records out 00:33:24.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468656 s, 8.7 MB/s 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:24.397 07:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:33:24.656 /dev/nbd1 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:24.914 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.915 1+0 records in 00:33:24.915 1+0 records out 00:33:24.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728742 s, 5.6 MB/s 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:24.915 07:42:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:24.915 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:33:25.172 /dev/nbd10 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.172 1+0 records in 00:33:25.172 1+0 records out 00:33:25.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571897 s, 7.2 MB/s 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:25.172 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:33:25.431 /dev/nbd11 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.431 07:42:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.431 1+0 records in 00:33:25.431 1+0 records out 00:33:25.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904074 s, 4.5 MB/s 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:25.431 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.432 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.432 07:42:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:25.432 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.432 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:25.432 07:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:33:25.691 /dev/nbd12 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.691 1+0 records in 00:33:25.691 1+0 records out 00:33:25.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480047 s, 8.5 MB/s 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:25.691 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:33:25.950 /dev/nbd13 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.950 1+0 records in 00:33:25.950 1+0 records out 00:33:25.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084904 s, 4.8 MB/s 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:25.950 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:26.208 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd0", 00:33:26.208 "bdev_name": "nvme0n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd1", 00:33:26.208 "bdev_name": "nvme1n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd10", 00:33:26.208 "bdev_name": "nvme2n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd11", 00:33:26.208 "bdev_name": "nvme2n2" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd12", 00:33:26.208 "bdev_name": "nvme2n3" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd13", 00:33:26.208 "bdev_name": "nvme3n1" 00:33:26.208 } 00:33:26.208 ]' 00:33:26.208 07:42:04 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd0", 00:33:26.208 "bdev_name": "nvme0n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd1", 00:33:26.208 "bdev_name": "nvme1n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd10", 00:33:26.208 "bdev_name": "nvme2n1" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd11", 00:33:26.208 "bdev_name": "nvme2n2" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd12", 00:33:26.208 "bdev_name": "nvme2n3" 00:33:26.208 }, 00:33:26.208 { 00:33:26.208 "nbd_device": "/dev/nbd13", 00:33:26.208 "bdev_name": "nvme3n1" 00:33:26.208 } 00:33:26.208 ]' 00:33:26.208 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:26.466 /dev/nbd1 00:33:26.466 /dev/nbd10 00:33:26.466 /dev/nbd11 00:33:26.466 /dev/nbd12 00:33:26.466 /dev/nbd13' 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:26.466 /dev/nbd1 00:33:26.466 /dev/nbd10 00:33:26.466 /dev/nbd11 00:33:26.466 /dev/nbd12 00:33:26.466 /dev/nbd13' 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:26.466 256+0 records in 00:33:26.466 256+0 records out 00:33:26.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106532 s, 98.4 MB/s 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:26.466 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:26.466 256+0 records in 00:33:26.466 256+0 records out 00:33:26.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149768 s, 7.0 MB/s 00:33:26.467 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:26.467 07:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:26.734 256+0 records in 00:33:26.734 256+0 records out 00:33:26.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172023 s, 
6.1 MB/s 00:33:26.734 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:26.734 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:33:26.734 256+0 records in 00:33:26.734 256+0 records out 00:33:26.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160458 s, 6.5 MB/s 00:33:26.734 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:26.734 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:33:26.993 256+0 records in 00:33:26.993 256+0 records out 00:33:26.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15452 s, 6.8 MB/s 00:33:26.993 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:26.993 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:33:27.254 256+0 records in 00:33:27.254 256+0 records out 00:33:27.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152 s, 6.9 MB/s 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:33:27.254 256+0 records in 00:33:27.254 256+0 records out 00:33:27.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159753 s, 6.6 MB/s 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:33:27.254 
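The write/verify phase running here pushes the same 1 MiB of random data to every NBD device with O_DIRECT writes and then byte-compares each device against the source file. Stripped of the xtrace noise, and with the repo-internal nbdrandtest path swapped for a short stand-in, the pattern is roughly:
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
tmp_file=/tmp/nbdrandtest                              # stand-in for the repo-local test file
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256    # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write pass to each device
done
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"                      # verify pass: first 1 MiB must match
done
rm "$tmp_file"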
07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:27.254 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:27.514 07:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:27.514 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.082 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.341 07:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:28.907 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:29.164 
07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:29.164 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:29.422 07:42:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:29.679 malloc_lvol_verify 00:33:29.679 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:29.937 dbe368b6-71e6-4a22-8c04-f97d1e873413 00:33:29.937 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:30.196 f0d44412-7b34-4e8d-aee8-caaf089d64b3 00:33:30.196 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:30.454 /dev/nbd0 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:30.454 mke2fs 1.46.5 (30-Dec-2021) 00:33:30.454 Discarding device blocks: 0/4096 done 00:33:30.454 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:30.454 
00:33:30.454 Allocating group tables: 0/1 done 00:33:30.454 Writing inode tables: 0/1 done 00:33:30.454 Creating journal (1024 blocks): done 00:33:30.454 Writing superblocks and filesystem accounting information: 0/1 done 00:33:30.454 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:30.454 07:42:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76856 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76856 ']' 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76856 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76856 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:30.711 killing process with pid 76856 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76856' 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76856 00:33:30.711 07:42:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76856 00:33:32.105 ************************************ 00:33:32.105 END TEST bdev_nbd 00:33:32.105 ************************************ 00:33:32.105 07:42:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:32.105 00:33:32.105 real 0m13.254s 00:33:32.105 user 0m18.418s 00:33:32.105 sys 0m4.516s 00:33:32.105 
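For reference, the nbd_with_lvol_verify stage that closed out bdev_nbd above condenses to the following RPC sequence. This is a sketch assembled from the commands visible in the trace; the socket path, names and sizes are the ones this run used, and the argument meanings are inferred from rpc.py conventions rather than stated in the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # logical volume store on top of it
  $rpc bdev_lvol_create lvol 4 -l lvs                    # small lvol named "lvol" inside store "lvs"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # prove real I/O works end to end
  $rpc nbd_stop_disk /dev/nbd0                           # tear the export back down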
07:42:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:32.105 07:42:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:32.363 07:42:10 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:33:32.363 07:42:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:32.363 07:42:10 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:33:32.363 07:42:10 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:33:32.363 07:42:10 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:33:32.363 07:42:10 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:32.363 07:42:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:32.363 07:42:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:32.363 ************************************ 00:33:32.363 START TEST bdev_fio 00:33:32.363 ************************************ 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:33:32.363 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:32.363 07:42:10 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.363 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:32.364 ************************************ 00:33:32.364 START TEST bdev_fio_rw_verify 00:33:32.364 ************************************ 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:32.364 07:42:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:32.622 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:32.622 fio-3.35 00:33:32.622 Starting 6 threads 00:33:44.823 00:33:44.823 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77287: Mon Jul 15 07:42:22 2024 00:33:44.823 read: IOPS=27.3k, 
BW=107MiB/s (112MB/s)(1066MiB/10001msec) 00:33:44.823 slat (usec): min=3, max=768, avg= 7.30, stdev= 4.70 00:33:44.823 clat (usec): min=125, max=4654, avg=682.62, stdev=244.90 00:33:44.823 lat (usec): min=132, max=4659, avg=689.91, stdev=245.56 00:33:44.823 clat percentiles (usec): 00:33:44.823 | 50.000th=[ 709], 99.000th=[ 1270], 99.900th=[ 1827], 99.990th=[ 3884], 00:33:44.823 | 99.999th=[ 4293] 00:33:44.823 write: IOPS=27.5k, BW=107MiB/s (112MB/s)(1072MiB/10001msec); 0 zone resets 00:33:44.823 slat (usec): min=13, max=3369, avg=28.28, stdev=34.27 00:33:44.823 clat (usec): min=87, max=5046, avg=785.56, stdev=256.69 00:33:44.823 lat (usec): min=111, max=5174, avg=813.84, stdev=260.00 00:33:44.823 clat percentiles (usec): 00:33:44.823 | 50.000th=[ 791], 99.000th=[ 1500], 99.900th=[ 2040], 99.990th=[ 2835], 00:33:44.823 | 99.999th=[ 4817] 00:33:44.823 bw ( KiB/s): min=93451, max=137048, per=99.89%, avg=109684.58, stdev=1988.57, samples=114 00:33:44.823 iops : min=23362, max=34262, avg=27420.63, stdev=497.13, samples=114 00:33:44.823 lat (usec) : 100=0.01%, 250=2.54%, 500=15.83%, 750=31.67%, 1000=38.46% 00:33:44.823 lat (msec) : 2=11.41%, 4=0.09%, 10=0.01% 00:33:44.823 cpu : usr=58.61%, sys=27.01%, ctx=7850, majf=0, minf=23408 00:33:44.823 IO depths : 1=11.6%, 2=24.0%, 4=50.9%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:44.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.823 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.823 issued rwts: total=272837,274550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.823 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:44.823 00:33:44.823 Run status group 0 (all jobs): 00:33:44.823 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1066MiB (1118MB), run=10001-10001msec 00:33:44.823 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1072MiB (1125MB), run=10001-10001msec 00:33:45.081 ----------------------------------------------------- 00:33:45.081 Suppressions used: 00:33:45.081 count bytes template 00:33:45.081 6 48 /usr/src/fio/parse.c 00:33:45.081 1522 146112 /usr/src/fio/iolog.c 00:33:45.081 1 8 libtcmalloc_minimal.so 00:33:45.081 1 904 libcrypto.so 00:33:45.081 ----------------------------------------------------- 00:33:45.081 00:33:45.081 00:33:45.081 real 0m12.653s 00:33:45.081 user 0m37.228s 00:33:45.081 sys 0m16.683s 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:45.081 ************************************ 00:33:45.081 END TEST bdev_fio_rw_verify 00:33:45.081 ************************************ 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1282 -- # local bdev_type= 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5ca8bf42-2cc8-4ffc-bb84-bdd7c9cd3d9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5ca8bf42-2cc8-4ffc-bb84-bdd7c9cd3d9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b74770ec-f5a4-474c-bd7b-58b818562183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b74770ec-f5a4-474c-bd7b-58b818562183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "49a8af11-3b6c-4f72-b70c-5c2d8250be8a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "49a8af11-3b6c-4f72-b70c-5c2d8250be8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "485503eb-af5d-4880-abf8-b8450f55221f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "485503eb-af5d-4880-abf8-b8450f55221f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b2554b6c-bff3-4f44-88dd-1adb6ecaf4e8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2554b6c-bff3-4f44-88dd-1adb6ecaf4e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1c95daa7-9a2b-4e1f-b5c5-2c80db2c4070"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1c95daa7-9a2b-4e1f-b5c5-2c80db2c4070",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:45.081 /home/vagrant/spdk_repo/spdk 00:33:45.081 07:42:23 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:33:45.081 00:33:45.081 real 0m12.841s 00:33:45.081 user 0m37.337s 00:33:45.081 sys 0m16.760s 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.081 ************************************ 00:33:45.081 END TEST bdev_fio 00:33:45.081 ************************************ 00:33:45.081 07:42:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:45.081 07:42:23 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:33:45.081 07:42:23 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:45.081 07:42:23 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:45.081 07:42:23 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:33:45.081 07:42:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.081 07:42:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:45.081 ************************************ 00:33:45.081 START TEST bdev_verify 00:33:45.081 ************************************ 00:33:45.081 07:42:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:45.339 [2024-07-15 07:42:23.739721] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:33:45.339 [2024-07-15 07:42:23.739923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77461 ] 00:33:45.339 [2024-07-15 07:42:23.914797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:45.906 [2024-07-15 07:42:24.212630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.906 [2024-07-15 07:42:24.212651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.164 Running I/O for 5 seconds... 
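The verify pass starting here is a single bdevperf invocation; stripped of the run_test wrapper shown above, it is roughly the following. Flag meanings are the usual bdevperf ones: queue depth 128, 4 KiB I/Os, a 5 second verify workload, with -C and -m 0x3 letting both cores 0 and 1 drive every bdev, which is why each device shows up twice (Core Mask 0x1 and 0x2) in the results below.

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3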
00:33:51.430 00:33:51.430 Latency(us) 00:33:51.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.430 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x0 length 0xa0000 00:33:51.430 nvme0n1 : 5.01 1582.59 6.18 0.00 0.00 80729.12 13107.20 74353.57 00:33:51.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0xa0000 length 0xa0000 00:33:51.430 nvme0n1 : 5.08 1537.22 6.00 0.00 0.00 83111.80 15371.17 85315.96 00:33:51.430 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x0 length 0xbd0bd 00:33:51.430 nvme1n1 : 5.07 2874.53 11.23 0.00 0.00 44271.22 5123.72 74353.57 00:33:51.430 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:33:51.430 nvme1n1 : 5.07 2775.53 10.84 0.00 0.00 45878.82 5123.72 72923.69 00:33:51.430 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x0 length 0x80000 00:33:51.430 nvme2n1 : 5.07 1591.12 6.22 0.00 0.00 79843.71 7923.90 65774.31 00:33:51.430 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x80000 length 0x80000 00:33:51.430 nvme2n1 : 5.08 1536.61 6.00 0.00 0.00 82665.46 13524.25 71970.44 00:33:51.430 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x0 length 0x80000 00:33:51.430 nvme2n2 : 5.06 1593.25 6.22 0.00 0.00 79577.93 8996.31 81026.33 00:33:51.430 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.430 Verification LBA range: start 0x80000 length 0x80000 00:33:51.431 nvme2n2 : 5.08 1538.07 6.01 0.00 0.00 82418.61 11915.64 73400.32 00:33:51.431 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.431 Verification LBA range: start 0x0 length 0x80000 00:33:51.431 nvme2n3 : 5.07 1590.17 6.21 0.00 0.00 79577.78 7417.48 75306.82 00:33:51.431 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.431 Verification LBA range: start 0x80000 length 0x80000 00:33:51.431 nvme2n3 : 5.08 1536.01 6.00 0.00 0.00 82362.39 14120.03 70063.94 00:33:51.431 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.431 Verification LBA range: start 0x0 length 0x20000 00:33:51.431 nvme3n1 : 5.07 1589.49 6.21 0.00 0.00 79456.14 8400.52 85315.96 00:33:51.431 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.431 Verification LBA range: start 0x20000 length 0x20000 00:33:51.431 nvme3n1 : 5.09 1535.40 6.00 0.00 0.00 82235.09 12571.00 78643.20 00:33:51.431 =================================================================================================================== 00:33:51.431 Total : 21279.98 83.12 0.00 0.00 71588.04 5123.72 85315.96 00:33:52.806 00:33:52.806 real 0m7.636s 00:33:52.806 user 0m11.776s 00:33:52.806 sys 0m1.888s 00:33:52.806 07:42:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.806 07:42:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:33:52.806 ************************************ 00:33:52.806 END TEST bdev_verify 00:33:52.806 ************************************ 00:33:52.806 07:42:31 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:33:52.806 07:42:31 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:52.806 07:42:31 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:33:52.806 07:42:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.806 07:42:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:52.806 ************************************ 00:33:52.806 START TEST bdev_verify_big_io 00:33:52.806 ************************************ 00:33:52.806 07:42:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:53.080 [2024-07-15 07:42:31.429655] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:33:53.080 [2024-07-15 07:42:31.430129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77563 ] 00:33:53.080 [2024-07-15 07:42:31.596696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:53.339 [2024-07-15 07:42:31.878503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.339 [2024-07-15 07:42:31.878526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.275 Running I/O for 5 seconds... 00:34:00.828 00:34:00.829 Latency(us) 00:34:00.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.829 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0xa000 00:34:00.829 nvme0n1 : 5.81 113.00 7.06 0.00 0.00 1078668.79 135361.63 1761607.68 00:34:00.829 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0xa000 length 0xa000 00:34:00.829 nvme0n1 : 5.77 112.40 7.02 0.00 0.00 1088990.97 38368.35 2669102.55 00:34:00.829 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0xbd0b 00:34:00.829 nvme1n1 : 5.85 175.11 10.94 0.00 0.00 696584.52 11439.01 789291.75 00:34:00.829 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0xbd0b length 0xbd0b 00:34:00.829 nvme1n1 : 5.78 186.14 11.63 0.00 0.00 657985.62 12511.42 960876.92 00:34:00.829 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0x8000 00:34:00.829 nvme2n1 : 5.90 92.15 5.76 0.00 0.00 1286641.21 71493.82 1853119.77 00:34:00.829 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x8000 length 0x8000 00:34:00.829 nvme2n1 : 5.79 167.25 10.45 0.00 0.00 716998.06 17635.14 884616.84 00:34:00.829 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0x8000 00:34:00.829 nvme2n2 : 5.85 136.71 8.54 0.00 0.00 838106.24 40513.16 1189657.13 00:34:00.829 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:34:00.829 Verification LBA range: start 0x8000 length 0x8000 00:34:00.829 nvme2n2 : 5.77 171.84 10.74 0.00 0.00 683041.75 20494.89 1014258.97 00:34:00.829 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0x8000 00:34:00.829 nvme2n3 : 5.88 160.45 10.03 0.00 0.00 690446.37 41943.04 842673.80 00:34:00.829 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x8000 length 0x8000 00:34:00.829 nvme2n3 : 5.79 185.13 11.57 0.00 0.00 619393.34 15609.48 697779.67 00:34:00.829 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x0 length 0x2000 00:34:00.829 nvme3n1 : 5.90 162.73 10.17 0.00 0.00 663529.84 8340.95 1731103.65 00:34:00.829 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:00.829 Verification LBA range: start 0x2000 length 0x2000 00:34:00.829 nvme3n1 : 5.78 156.28 9.77 0.00 0.00 718954.11 21090.68 1784485.70 00:34:00.829 =================================================================================================================== 00:34:00.829 Total : 1819.20 113.70 0.00 0.00 772507.70 8340.95 2669102.55 00:34:01.395 00:34:01.395 real 0m8.609s 00:34:01.395 user 0m15.079s 00:34:01.395 sys 0m0.790s 00:34:01.395 ************************************ 00:34:01.395 END TEST bdev_verify_big_io 00:34:01.395 07:42:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:01.395 07:42:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:01.395 ************************************ 00:34:01.395 07:42:39 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:34:01.395 07:42:39 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:01.395 07:42:39 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:01.395 07:42:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.395 07:42:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:01.395 ************************************ 00:34:01.395 START TEST bdev_write_zeroes 00:34:01.395 ************************************ 00:34:01.395 07:42:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:01.654 [2024-07-15 07:42:40.112089] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:34:01.654 [2024-07-15 07:42:40.112618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77679 ] 00:34:01.913 [2024-07-15 07:42:40.293646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.171 [2024-07-15 07:42:40.538318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.736 Running I/O for 1 seconds... 
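The two runs bracketing this point reuse the same bdevperf binary and the same bdev.json; only the workload knobs change, taken from the respective command lines above (paths abbreviated here for readability):

  # bdev_verify_big_io: 64 KiB blocks instead of 4 KiB, still 5 s across cores 0-1
  bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
  # bdev_write_zeroes: 4 KiB write_zeroes, a single 1 s pass on one core
  bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1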
00:34:03.671 00:34:03.671 Latency(us) 00:34:03.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.671 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme0n1 : 1.01 9712.38 37.94 0.00 0.00 13161.93 7357.91 18707.55 00:34:03.671 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme1n1 : 1.02 15851.60 61.92 0.00 0.00 8036.33 4587.52 16920.20 00:34:03.671 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme2n1 : 1.02 9686.01 37.84 0.00 0.00 13097.12 7238.75 19660.80 00:34:03.671 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme2n2 : 1.02 9671.21 37.78 0.00 0.00 13107.07 7238.75 19541.64 00:34:03.671 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme2n3 : 1.02 9657.14 37.72 0.00 0.00 13116.59 7119.59 19422.49 00:34:03.671 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:03.671 nvme3n1 : 1.02 9642.81 37.67 0.00 0.00 13122.48 7060.01 19660.80 00:34:03.671 =================================================================================================================== 00:34:03.671 Total : 64221.15 250.86 0.00 0.00 11868.56 4587.52 19660.80 00:34:05.049 00:34:05.049 real 0m3.444s 00:34:05.049 user 0m2.616s 00:34:05.049 sys 0m0.637s 00:34:05.049 07:42:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.049 07:42:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:05.049 ************************************ 00:34:05.049 END TEST bdev_write_zeroes 00:34:05.049 ************************************ 00:34:05.049 07:42:43 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:34:05.049 07:42:43 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:05.049 07:42:43 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:05.049 07:42:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.049 07:42:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:05.049 ************************************ 00:34:05.049 START TEST bdev_json_nonenclosed 00:34:05.049 ************************************ 00:34:05.049 07:42:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:05.049 [2024-07-15 07:42:43.606696] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:34:05.049 [2024-07-15 07:42:43.606933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77740 ] 00:34:05.306 [2024-07-15 07:42:43.788589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.615 [2024-07-15 07:42:44.060689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.615 [2024-07-15 07:42:44.060828] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:05.615 [2024-07-15 07:42:44.060856] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:05.615 [2024-07-15 07:42:44.060877] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:06.182 00:34:06.182 real 0m1.051s 00:34:06.182 user 0m0.761s 00:34:06.182 sys 0m0.183s 00:34:06.182 07:42:44 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:34:06.182 07:42:44 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:06.182 07:42:44 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:06.182 ************************************ 00:34:06.182 END TEST bdev_json_nonenclosed 00:34:06.182 ************************************ 00:34:06.182 07:42:44 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:34:06.182 07:42:44 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:34:06.182 07:42:44 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:06.182 07:42:44 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:34:06.182 07:42:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.182 07:42:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:06.182 ************************************ 00:34:06.182 START TEST bdev_json_nonarray 00:34:06.182 ************************************ 00:34:06.182 07:42:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:06.182 [2024-07-15 07:42:44.721546] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:34:06.182 [2024-07-15 07:42:44.721752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:34:06.440 [2024-07-15 07:42:44.904687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.699 [2024-07-15 07:42:45.194234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.699 [2024-07-15 07:42:45.194401] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
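Both bdev_json_* cases in this stretch fail on purpose: nonenclosed.json and nonarray.json break the expected top-level shape, and bdevperf is expected to abort instead of starting. For contrast, a well-formed --json file follows the same outline the save_config dump later in this log shows (a minimal sketch; the bdev entries are elided):

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [ ... ] }
    ]
  }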
00:34:06.699 [2024-07-15 07:42:45.194434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:06.699 [2024-07-15 07:42:45.194478] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:07.265 00:34:07.265 real 0m1.088s 00:34:07.265 user 0m0.773s 00:34:07.265 sys 0m0.206s 00:34:07.265 ************************************ 00:34:07.265 END TEST bdev_json_nonarray 00:34:07.265 ************************************ 00:34:07.265 07:42:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:34:07.265 07:42:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.265 07:42:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:07.265 07:42:45 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:34:07.265 07:42:45 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:07.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:08.399 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:34:08.399 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:10.315 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:34:10.315 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:34:10.315 ************************************ 00:34:10.315 END TEST blockdev_xnvme 00:34:10.315 ************************************ 00:34:10.315 00:34:10.315 real 1m7.229s 00:34:10.315 user 1m46.703s 00:34:10.315 sys 0m33.649s 00:34:10.315 07:42:48 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:10.315 07:42:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:10.315 07:42:48 -- common/autotest_common.sh@1142 -- # return 0 00:34:10.315 07:42:48 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:34:10.315 07:42:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:10.315 07:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.315 07:42:48 -- common/autotest_common.sh@10 -- # set +x 00:34:10.315 ************************************ 00:34:10.315 START TEST ublk 00:34:10.315 ************************************ 00:34:10.315 07:42:48 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:34:10.315 * Looking for test storage... 
00:34:10.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:34:10.315 07:42:48 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:34:10.315 07:42:48 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:34:10.315 07:42:48 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:34:10.315 07:42:48 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:34:10.315 07:42:48 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:34:10.315 07:42:48 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:34:10.315 07:42:48 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:34:10.315 07:42:48 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:34:10.315 07:42:48 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:34:10.592 07:42:48 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:34:10.592 07:42:48 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:10.592 07:42:48 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:10.592 07:42:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:34:10.592 ************************************ 00:34:10.592 START TEST test_save_ublk_config 00:34:10.592 ************************************ 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78057 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78057 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78057 ']' 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:10.592 07:42:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:10.592 [2024-07-15 07:42:49.049496] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:34:10.592 [2024-07-15 07:42:49.049906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78057 ] 00:34:10.850 [2024-07-15 07:42:49.235436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.108 [2024-07-15 07:42:49.516149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.041 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:12.041 [2024-07-15 07:42:50.436569] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:12.041 [2024-07-15 07:42:50.437923] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:12.041 malloc0 00:34:12.042 [2024-07-15 07:42:50.532770] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:34:12.042 [2024-07-15 07:42:50.532937] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:34:12.042 [2024-07-15 07:42:50.532956] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:34:12.042 [2024-07-15 07:42:50.532970] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:34:12.042 [2024-07-15 07:42:50.536860] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:12.042 [2024-07-15 07:42:50.536902] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:12.042 [2024-07-15 07:42:50.547559] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:12.042 [2024-07-15 07:42:50.547779] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:34:12.042 [2024-07-15 07:42:50.570208] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:34:12.042 0 00:34:12.042 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.042 07:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:34:12.042 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.042 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:12.300 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.300 07:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:34:12.300 "subsystems": [ 00:34:12.300 { 00:34:12.300 "subsystem": "keyring", 00:34:12.300 "config": [] 00:34:12.300 }, 00:34:12.300 { 00:34:12.300 "subsystem": "iobuf", 00:34:12.300 "config": [ 00:34:12.300 { 00:34:12.300 "method": "iobuf_set_options", 00:34:12.300 "params": { 00:34:12.300 "small_pool_count": 8192, 00:34:12.300 "large_pool_count": 1024, 00:34:12.300 "small_bufsize": 8192, 00:34:12.300 "large_bufsize": 135168 00:34:12.300 } 00:34:12.300 } 00:34:12.300 ] 00:34:12.300 }, 00:34:12.300 { 
00:34:12.300 "subsystem": "sock", 00:34:12.300 "config": [ 00:34:12.300 { 00:34:12.300 "method": "sock_set_default_impl", 00:34:12.300 "params": { 00:34:12.300 "impl_name": "posix" 00:34:12.300 } 00:34:12.300 }, 00:34:12.300 { 00:34:12.300 "method": "sock_impl_set_options", 00:34:12.300 "params": { 00:34:12.300 "impl_name": "ssl", 00:34:12.300 "recv_buf_size": 4096, 00:34:12.300 "send_buf_size": 4096, 00:34:12.300 "enable_recv_pipe": true, 00:34:12.300 "enable_quickack": false, 00:34:12.300 "enable_placement_id": 0, 00:34:12.300 "enable_zerocopy_send_server": true, 00:34:12.300 "enable_zerocopy_send_client": false, 00:34:12.300 "zerocopy_threshold": 0, 00:34:12.300 "tls_version": 0, 00:34:12.300 "enable_ktls": false 00:34:12.300 } 00:34:12.300 }, 00:34:12.300 { 00:34:12.300 "method": "sock_impl_set_options", 00:34:12.300 "params": { 00:34:12.300 "impl_name": "posix", 00:34:12.300 "recv_buf_size": 2097152, 00:34:12.300 "send_buf_size": 2097152, 00:34:12.300 "enable_recv_pipe": true, 00:34:12.300 "enable_quickack": false, 00:34:12.300 "enable_placement_id": 0, 00:34:12.300 "enable_zerocopy_send_server": true, 00:34:12.300 "enable_zerocopy_send_client": false, 00:34:12.300 "zerocopy_threshold": 0, 00:34:12.300 "tls_version": 0, 00:34:12.300 "enable_ktls": false 00:34:12.300 } 00:34:12.300 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "vmd", 00:34:12.301 "config": [] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "accel", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "accel_set_options", 00:34:12.301 "params": { 00:34:12.301 "small_cache_size": 128, 00:34:12.301 "large_cache_size": 16, 00:34:12.301 "task_count": 2048, 00:34:12.301 "sequence_count": 2048, 00:34:12.301 "buf_count": 2048 00:34:12.301 } 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "bdev", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "bdev_set_options", 00:34:12.301 "params": { 00:34:12.301 "bdev_io_pool_size": 65535, 00:34:12.301 "bdev_io_cache_size": 256, 00:34:12.301 "bdev_auto_examine": true, 00:34:12.301 "iobuf_small_cache_size": 128, 00:34:12.301 "iobuf_large_cache_size": 16 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_raid_set_options", 00:34:12.301 "params": { 00:34:12.301 "process_window_size_kb": 1024 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_iscsi_set_options", 00:34:12.301 "params": { 00:34:12.301 "timeout_sec": 30 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_nvme_set_options", 00:34:12.301 "params": { 00:34:12.301 "action_on_timeout": "none", 00:34:12.301 "timeout_us": 0, 00:34:12.301 "timeout_admin_us": 0, 00:34:12.301 "keep_alive_timeout_ms": 10000, 00:34:12.301 "arbitration_burst": 0, 00:34:12.301 "low_priority_weight": 0, 00:34:12.301 "medium_priority_weight": 0, 00:34:12.301 "high_priority_weight": 0, 00:34:12.301 "nvme_adminq_poll_period_us": 10000, 00:34:12.301 "nvme_ioq_poll_period_us": 0, 00:34:12.301 "io_queue_requests": 0, 00:34:12.301 "delay_cmd_submit": true, 00:34:12.301 "transport_retry_count": 4, 00:34:12.301 "bdev_retry_count": 3, 00:34:12.301 "transport_ack_timeout": 0, 00:34:12.301 "ctrlr_loss_timeout_sec": 0, 00:34:12.301 "reconnect_delay_sec": 0, 00:34:12.301 "fast_io_fail_timeout_sec": 0, 00:34:12.301 "disable_auto_failback": false, 00:34:12.301 "generate_uuids": false, 00:34:12.301 "transport_tos": 0, 00:34:12.301 "nvme_error_stat": false, 00:34:12.301 "rdma_srq_size": 0, 00:34:12.301 
"io_path_stat": false, 00:34:12.301 "allow_accel_sequence": false, 00:34:12.301 "rdma_max_cq_size": 0, 00:34:12.301 "rdma_cm_event_timeout_ms": 0, 00:34:12.301 "dhchap_digests": [ 00:34:12.301 "sha256", 00:34:12.301 "sha384", 00:34:12.301 "sha512" 00:34:12.301 ], 00:34:12.301 "dhchap_dhgroups": [ 00:34:12.301 "null", 00:34:12.301 "ffdhe2048", 00:34:12.301 "ffdhe3072", 00:34:12.301 "ffdhe4096", 00:34:12.301 "ffdhe6144", 00:34:12.301 "ffdhe8192" 00:34:12.301 ] 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_nvme_set_hotplug", 00:34:12.301 "params": { 00:34:12.301 "period_us": 100000, 00:34:12.301 "enable": false 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_malloc_create", 00:34:12.301 "params": { 00:34:12.301 "name": "malloc0", 00:34:12.301 "num_blocks": 8192, 00:34:12.301 "block_size": 4096, 00:34:12.301 "physical_block_size": 4096, 00:34:12.301 "uuid": "86afd4ff-1c0f-420b-9f33-b3d648fef2e5", 00:34:12.301 "optimal_io_boundary": 0 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "bdev_wait_for_examine" 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "scsi", 00:34:12.301 "config": null 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "scheduler", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "framework_set_scheduler", 00:34:12.301 "params": { 00:34:12.301 "name": "static" 00:34:12.301 } 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "vhost_scsi", 00:34:12.301 "config": [] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "vhost_blk", 00:34:12.301 "config": [] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "ublk", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "ublk_create_target", 00:34:12.301 "params": { 00:34:12.301 "cpumask": "1" 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "ublk_start_disk", 00:34:12.301 "params": { 00:34:12.301 "bdev_name": "malloc0", 00:34:12.301 "ublk_id": 0, 00:34:12.301 "num_queues": 1, 00:34:12.301 "queue_depth": 128 00:34:12.301 } 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "nbd", 00:34:12.301 "config": [] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "nvmf", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "nvmf_set_config", 00:34:12.301 "params": { 00:34:12.301 "discovery_filter": "match_any", 00:34:12.301 "admin_cmd_passthru": { 00:34:12.301 "identify_ctrlr": false 00:34:12.301 } 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "nvmf_set_max_subsystems", 00:34:12.301 "params": { 00:34:12.301 "max_subsystems": 1024 00:34:12.301 } 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "method": "nvmf_set_crdt", 00:34:12.301 "params": { 00:34:12.301 "crdt1": 0, 00:34:12.301 "crdt2": 0, 00:34:12.301 "crdt3": 0 00:34:12.301 } 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }, 00:34:12.301 { 00:34:12.301 "subsystem": "iscsi", 00:34:12.301 "config": [ 00:34:12.301 { 00:34:12.301 "method": "iscsi_set_options", 00:34:12.301 "params": { 00:34:12.301 "node_base": "iqn.2016-06.io.spdk", 00:34:12.301 "max_sessions": 128, 00:34:12.301 "max_connections_per_session": 2, 00:34:12.301 "max_queue_depth": 64, 00:34:12.301 "default_time2wait": 2, 00:34:12.301 "default_time2retain": 20, 00:34:12.301 "first_burst_length": 8192, 00:34:12.301 "immediate_data": true, 00:34:12.301 "allow_duplicated_isid": false, 00:34:12.301 "error_recovery_level": 0, 00:34:12.301 "nop_timeout": 60, 
00:34:12.301 "nop_in_interval": 30, 00:34:12.301 "disable_chap": false, 00:34:12.301 "require_chap": false, 00:34:12.301 "mutual_chap": false, 00:34:12.301 "chap_group": 0, 00:34:12.301 "max_large_datain_per_connection": 64, 00:34:12.301 "max_r2t_per_connection": 4, 00:34:12.301 "pdu_pool_size": 36864, 00:34:12.301 "immediate_data_pool_size": 16384, 00:34:12.301 "data_out_pool_size": 2048 00:34:12.301 } 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 } 00:34:12.301 ] 00:34:12.301 }' 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78057 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78057 ']' 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78057 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78057 00:34:12.301 killing process with pid 78057 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78057' 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78057 00:34:12.301 07:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78057 00:34:14.198 [2024-07-15 07:42:52.355733] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:34:14.198 [2024-07-15 07:42:52.386648] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:14.198 [2024-07-15 07:42:52.389573] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:34:14.198 [2024-07-15 07:42:52.396524] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:14.198 [2024-07-15 07:42:52.396623] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:34:14.198 [2024-07-15 07:42:52.396640] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:34:14.198 [2024-07-15 07:42:52.396683] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:34:14.198 [2024-07-15 07:42:52.396892] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78123 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78123 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78123 ']' 00:34:15.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:15.571 07:42:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:34:15.571 "subsystems": [ 00:34:15.571 { 00:34:15.571 "subsystem": "keyring", 00:34:15.571 "config": [] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "iobuf", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "iobuf_set_options", 00:34:15.571 "params": { 00:34:15.571 "small_pool_count": 8192, 00:34:15.571 "large_pool_count": 1024, 00:34:15.571 "small_bufsize": 8192, 00:34:15.571 "large_bufsize": 135168 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "sock", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "sock_set_default_impl", 00:34:15.571 "params": { 00:34:15.571 "impl_name": "posix" 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "sock_impl_set_options", 00:34:15.571 "params": { 00:34:15.571 "impl_name": "ssl", 00:34:15.571 "recv_buf_size": 4096, 00:34:15.571 "send_buf_size": 4096, 00:34:15.571 "enable_recv_pipe": true, 00:34:15.571 "enable_quickack": false, 00:34:15.571 "enable_placement_id": 0, 00:34:15.571 "enable_zerocopy_send_server": true, 00:34:15.571 "enable_zerocopy_send_client": false, 00:34:15.571 "zerocopy_threshold": 0, 00:34:15.571 "tls_version": 0, 00:34:15.571 "enable_ktls": false 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "sock_impl_set_options", 00:34:15.571 "params": { 00:34:15.571 "impl_name": "posix", 00:34:15.571 "recv_buf_size": 2097152, 00:34:15.571 "send_buf_size": 2097152, 00:34:15.571 "enable_recv_pipe": true, 00:34:15.571 "enable_quickack": false, 00:34:15.571 "enable_placement_id": 0, 00:34:15.571 "enable_zerocopy_send_server": true, 00:34:15.571 "enable_zerocopy_send_client": false, 00:34:15.571 "zerocopy_threshold": 0, 00:34:15.571 "tls_version": 0, 00:34:15.571 "enable_ktls": false 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "vmd", 00:34:15.571 "config": [] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "accel", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "accel_set_options", 00:34:15.571 "params": { 00:34:15.571 "small_cache_size": 128, 00:34:15.571 "large_cache_size": 16, 00:34:15.571 "task_count": 2048, 00:34:15.571 "sequence_count": 2048, 00:34:15.571 "buf_count": 2048 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "bdev", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "bdev_set_options", 00:34:15.571 "params": { 00:34:15.571 "bdev_io_pool_size": 65535, 00:34:15.571 "bdev_io_cache_size": 256, 00:34:15.571 "bdev_auto_examine": true, 00:34:15.571 "iobuf_small_cache_size": 128, 00:34:15.571 "iobuf_large_cache_size": 16 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_raid_set_options", 00:34:15.571 "params": { 00:34:15.571 "process_window_size_kb": 1024 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_iscsi_set_options", 00:34:15.571 "params": { 00:34:15.571 "timeout_sec": 30 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_nvme_set_options", 00:34:15.571 "params": { 00:34:15.571 "action_on_timeout": "none", 00:34:15.571 "timeout_us": 0, 00:34:15.571 "timeout_admin_us": 0, 00:34:15.571 "keep_alive_timeout_ms": 
10000, 00:34:15.571 "arbitration_burst": 0, 00:34:15.571 "low_priority_weight": 0, 00:34:15.571 "medium_priority_weight": 0, 00:34:15.571 "high_priority_weight": 0, 00:34:15.571 "nvme_adminq_poll_period_us": 10000, 00:34:15.571 "nvme_ioq_poll_period_us": 0, 00:34:15.571 "io_queue_requests": 0, 00:34:15.571 "delay_cmd_submit": true, 00:34:15.571 "transport_retry_count": 4, 00:34:15.571 "bdev_retry_count": 3, 00:34:15.571 "transport_ack_timeout": 0, 00:34:15.571 "ctrlr_loss_timeout_sec": 0, 00:34:15.571 "reconnect_delay_sec": 0, 00:34:15.571 "fast_io_fail_timeout_sec": 0, 00:34:15.571 "disable_auto_failback": false, 00:34:15.571 "generate_uuids": false, 00:34:15.571 "transport_tos": 0, 00:34:15.571 "nvme_error_stat": false, 00:34:15.571 "rdma_srq_size": 0, 00:34:15.571 "io_path_stat": false, 00:34:15.571 "allow_accel_sequence": false, 00:34:15.571 "rdma_max_cq_size": 0, 00:34:15.571 "rdma_cm_event_timeout_ms": 0, 00:34:15.571 "dhchap_digests": [ 00:34:15.571 "sha256", 00:34:15.571 "sha384", 00:34:15.571 "sha512" 00:34:15.571 ], 00:34:15.571 "dhchap_dhgroups": [ 00:34:15.571 "null", 00:34:15.571 "ffdhe2048", 00:34:15.571 "ffdhe3072", 00:34:15.571 "ffdhe4096", 00:34:15.571 "ffdhe6144", 00:34:15.571 "ffdhe8192" 00:34:15.571 ] 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_nvme_set_hotplug", 00:34:15.571 "params": { 00:34:15.571 "period_us": 100000, 00:34:15.571 "enable": false 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_malloc_create", 00:34:15.571 "params": { 00:34:15.571 "name": "malloc0", 00:34:15.571 "num_blocks": 8192, 00:34:15.571 "block_size": 4096, 00:34:15.571 "physical_block_size": 4096, 00:34:15.571 "uuid": "86afd4ff-1c0f-420b-9f33-b3d648fef2e5", 00:34:15.571 "optimal_io_boundary": 0 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "bdev_wait_for_examine" 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "scsi", 00:34:15.571 "config": null 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "scheduler", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "framework_set_scheduler", 00:34:15.571 "params": { 00:34:15.571 "name": "static" 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "vhost_scsi", 00:34:15.571 "config": [] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "vhost_blk", 00:34:15.571 "config": [] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "ublk", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "ublk_create_target", 00:34:15.571 "params": { 00:34:15.571 "cpumask": "1" 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "ublk_start_disk", 00:34:15.571 "params": { 00:34:15.571 "bdev_name": "malloc0", 00:34:15.571 "ublk_id": 0, 00:34:15.571 "num_queues": 1, 00:34:15.571 "queue_depth": 128 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "nbd", 00:34:15.571 "config": [] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "nvmf", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "nvmf_set_config", 00:34:15.571 "params": { 00:34:15.571 "discovery_filter": "match_any", 00:34:15.571 "admin_cmd_passthru": { 00:34:15.571 "identify_ctrlr": false 00:34:15.571 } 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "method": "nvmf_set_max_subsystems", 00:34:15.571 "params": { 00:34:15.571 "max_subsystems": 1024 00:34:15.571 } 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 
"method": "nvmf_set_crdt", 00:34:15.571 "params": { 00:34:15.571 "crdt1": 0, 00:34:15.571 "crdt2": 0, 00:34:15.571 "crdt3": 0 00:34:15.571 } 00:34:15.571 } 00:34:15.571 ] 00:34:15.571 }, 00:34:15.571 { 00:34:15.571 "subsystem": "iscsi", 00:34:15.571 "config": [ 00:34:15.571 { 00:34:15.571 "method": "iscsi_set_options", 00:34:15.571 "params": { 00:34:15.571 "node_base": "iqn.2016-06.io.spdk", 00:34:15.571 "max_sessions": 128, 00:34:15.571 "max_connections_per_session": 2, 00:34:15.571 "max_queue_depth": 64, 00:34:15.571 "default_time2wait": 2, 00:34:15.571 "default_time2retain": 20, 00:34:15.571 "first_burst_length": 8192, 00:34:15.571 "immediate_data": true, 00:34:15.571 "allow_duplicated_isid": false, 00:34:15.571 "error_recovery_level": 0, 00:34:15.571 "nop_timeout": 60, 00:34:15.571 "nop_in_interval": 30, 00:34:15.571 "disable_chap": false, 00:34:15.571 "require_chap": false, 00:34:15.571 "mutual_chap": false, 00:34:15.571 "chap_group": 0, 00:34:15.571 "max_large_datain_per_connection": 64, 00:34:15.571 "max_r2t_per_connection": 4, 00:34:15.571 "pdu_pool_size": 36864, 00:34:15.572 "immediate_data_pool_size": 16384, 00:34:15.572 "data_out_pool_size": 2048 00:34:15.572 } 00:34:15.572 } 00:34:15.572 ] 00:34:15.572 } 00:34:15.572 ] 00:34:15.572 }' 00:34:15.572 07:42:53 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:34:15.572 [2024-07-15 07:42:53.988975] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:34:15.572 [2024-07-15 07:42:53.989508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78123 ] 00:34:15.572 [2024-07-15 07:42:54.171432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.135 [2024-07-15 07:42:54.481192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.070 [2024-07-15 07:42:55.543517] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:17.070 [2024-07-15 07:42:55.544887] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:17.070 [2024-07-15 07:42:55.550681] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:34:17.070 [2024-07-15 07:42:55.550810] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:34:17.070 [2024-07-15 07:42:55.550868] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:34:17.070 [2024-07-15 07:42:55.550890] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:34:17.070 [2024-07-15 07:42:55.559689] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:17.070 [2024-07-15 07:42:55.559727] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:17.070 [2024-07-15 07:42:55.566510] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:17.070 [2024-07-15 07:42:55.566641] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:34:17.070 [2024-07-15 07:42:55.583578] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- 
common/autotest_common.sh@862 -- # return 0 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78123 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78123 ']' 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78123 00:34:17.070 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78123 00:34:17.328 killing process with pid 78123 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78123' 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78123 00:34:17.328 07:42:55 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78123 00:34:18.703 [2024-07-15 07:42:57.276266] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:34:18.703 [2024-07-15 07:42:57.311499] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:18.703 [2024-07-15 07:42:57.311721] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:34:18.962 [2024-07-15 07:42:57.319555] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:18.962 [2024-07-15 07:42:57.319623] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:34:18.962 [2024-07-15 07:42:57.319636] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:34:18.962 [2024-07-15 07:42:57.319671] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:34:18.962 [2024-07-15 07:42:57.319881] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:34:20.338 07:42:58 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:34:20.338 ************************************ 00:34:20.338 END TEST test_save_ublk_config 00:34:20.338 ************************************ 00:34:20.338 00:34:20.338 real 0m9.832s 00:34:20.338 user 0m8.216s 00:34:20.338 sys 0m2.480s 00:34:20.338 07:42:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:20.338 07:42:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@1142 -- # return 0 00:34:20.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
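The check that closes test_save_ublk_config above amounts to querying the restored device list and confirming the block node exists. A standalone equivalent, assuming jq and a running target (sketch only):

  # ask the target which ublk devices it exposes and take the first one
  blkpath=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  # confirm the kernel actually created the block device node
  [[ -b "$blkpath" ]] && echo "restored ublk device: $blkpath"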
00:34:20.338 07:42:58 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78203 00:34:20.338 07:42:58 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:34:20.338 07:42:58 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:20.338 07:42:58 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78203 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@829 -- # '[' -z 78203 ']' 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:20.338 07:42:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 [2024-07-15 07:42:58.937447] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:34:20.338 [2024-07-15 07:42:58.938676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78203 ] 00:34:20.596 [2024-07-15 07:42:59.146646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:20.854 [2024-07-15 07:42:59.418560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.854 [2024-07-15 07:42:59.418576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.790 07:43:00 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:21.790 07:43:00 ublk -- common/autotest_common.sh@862 -- # return 0 00:34:21.790 07:43:00 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:34:21.790 07:43:00 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:21.790 07:43:00 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:21.790 07:43:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:34:21.790 ************************************ 00:34:21.790 START TEST test_create_ublk 00:34:21.790 ************************************ 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:34:21.790 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:21.790 [2024-07-15 07:43:00.337492] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:21.790 [2024-07-15 07:43:00.340690] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.790 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:34:21.790 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.790 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:22.052 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.052 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 
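test_create_ublk, which starts here, follows the standard ublk bring-up: create the ublk target, back it with a malloc bdev, then expose that bdev as /dev/ublkb0 (the disk start is traced just below). Done by hand, with the same sizes and queue settings as the test, the steps would look roughly like:

  ./scripts/rpc.py ublk_create_target                      # start the ublk target inside the running app
  ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB RAM-backed bdev, 4 KiB blocks (returns Malloc0)
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # expose Malloc0 as /dev/ublkb0 with 4 queues, depth 512
  ./scripts/rpc.py ublk_get_disks                          # verify the device is registered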
00:34:22.052 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:34:22.052 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.052 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:22.052 [2024-07-15 07:43:00.660683] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:34:22.052 [2024-07-15 07:43:00.661243] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:34:22.052 [2024-07-15 07:43:00.661265] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:34:22.052 [2024-07-15 07:43:00.661280] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:34:22.310 [2024-07-15 07:43:00.668976] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:22.310 [2024-07-15 07:43:00.669159] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:22.310 [2024-07-15 07:43:00.676519] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:22.310 [2024-07-15 07:43:00.691762] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:34:22.310 [2024-07-15 07:43:00.706612] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:34:22.310 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:34:22.310 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.310 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:22.310 07:43:00 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:34:22.310 { 00:34:22.310 "ublk_device": "/dev/ublkb0", 00:34:22.310 "id": 0, 00:34:22.310 "queue_depth": 512, 00:34:22.310 "num_queues": 4, 00:34:22.310 "bdev_name": "Malloc0" 00:34:22.310 } 00:34:22.310 ]' 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:34:22.310 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:34:22.569 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:34:22.569 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:34:22.569 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:34:22.569 07:43:00 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:34:22.569 07:43:00 ublk.test_create_ublk -- 
lvol/common.sh@41 -- # local offset=0 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:34:22.569 07:43:00 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:34:22.569 fio: verification read phase will never start because write phase uses all of runtime 00:34:22.569 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:34:22.569 fio-3.35 00:34:22.569 Starting 1 process 00:34:34.814 00:34:34.814 fio_test: (groupid=0, jobs=1): err= 0: pid=78261: Mon Jul 15 07:43:11 2024 00:34:34.814 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(456MiB/10001msec); 0 zone resets 00:34:34.814 clat (usec): min=53, max=4122, avg=84.36, stdev=125.34 00:34:34.814 lat (usec): min=53, max=4122, avg=85.09, stdev=125.35 00:34:34.814 clat percentiles (usec): 00:34:34.814 | 1.00th=[ 60], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:34:34.814 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 78], 00:34:34.814 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 93], 00:34:34.814 | 99.00th=[ 111], 99.50th=[ 122], 99.90th=[ 2671], 99.95th=[ 3163], 00:34:34.814 | 99.99th=[ 3589] 00:34:34.814 bw ( KiB/s): min=44232, max=51088, per=100.00%, avg=46866.95, stdev=1673.42, samples=19 00:34:34.814 iops : min=11058, max=12772, avg=11716.84, stdev=418.34, samples=19 00:34:34.814 lat (usec) : 100=97.76%, 250=1.90%, 500=0.02%, 750=0.02%, 1000=0.03% 00:34:34.814 lat (msec) : 2=0.11%, 4=0.17%, 10=0.01% 00:34:34.814 cpu : usr=2.84%, sys=7.53%, ctx=116750, majf=0, minf=796 00:34:34.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.814 issued rwts: total=0,116747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.814 00:34:34.814 Run status group 0 (all jobs): 00:34:34.814 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=456MiB (478MB), run=10001-10001msec 00:34:34.814 00:34:34.814 Disk stats (read/write): 00:34:34.814 ublkb0: ios=0/115606, merge=0/0, ticks=0/8930, in_queue=8931, util=99.09% 00:34:34.814 07:43:11 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:34:34.814 07:43:11 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.814 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.814 [2024-07-15 07:43:11.240726] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:34:34.815 [2024-07-15 07:43:11.283578] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:34.815 [2024-07-15 07:43:11.284803] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:34:34.815 [2024-07-15 07:43:11.291532] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:34.815 [2024-07-15 07:43:11.291879] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:34:34.815 [2024-07-15 07:43:11.291909] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 [2024-07-15 07:43:11.307630] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:34:34.815 request: 00:34:34.815 { 00:34:34.815 "ublk_id": 0, 00:34:34.815 "method": "ublk_stop_disk", 00:34:34.815 "req_id": 1 00:34:34.815 } 00:34:34.815 Got JSON-RPC error response 00:34:34.815 response: 00:34:34.815 { 00:34:34.815 "code": -19, 00:34:34.815 "message": "No such device" 00:34:34.815 } 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:34.815 07:43:11 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 [2024-07-15 07:43:11.323631] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:34:34.815 [2024-07-15 07:43:11.331498] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:34:34.815 [2024-07-15 07:43:11.331559] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:34:34.815 ************************************ 00:34:34.815 END TEST test_create_ublk 00:34:34.815 ************************************ 00:34:34.815 07:43:11 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:34:34.815 00:34:34.815 real 0m11.489s 00:34:34.815 user 0m0.737s 00:34:34.815 sys 0m0.848s 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:11 ublk -- common/autotest_common.sh@1142 -- # return 0 00:34:34.815 07:43:11 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:34:34.815 07:43:11 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:34.815 07:43:11 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.815 07:43:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 ************************************ 00:34:34.815 START TEST test_create_multi_ublk 00:34:34.815 ************************************ 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 [2024-07-15 07:43:11.875512] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:34.815 [2024-07-15 07:43:11.878650] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:34:34.815 07:43:11 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 [2024-07-15 07:43:12.186666] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:34:34.815 [2024-07-15 07:43:12.187318] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:34:34.815 [2024-07-15 07:43:12.187340] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:34:34.815 [2024-07-15 07:43:12.187351] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:34:34.815 [2024-07-15 07:43:12.195034] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:34.815 [2024-07-15 07:43:12.195061] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:34.815 [2024-07-15 07:43:12.201559] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:34.815 [2024-07-15 07:43:12.202398] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:34:34.815 [2024-07-15 07:43:12.219567] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 [2024-07-15 07:43:12.540730] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:34:34.815 [2024-07-15 07:43:12.541314] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:34:34.815 [2024-07-15 07:43:12.541339] ublk.c: 
955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:34:34.815 [2024-07-15 07:43:12.541354] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:34:34.815 [2024-07-15 07:43:12.548513] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:34.815 [2024-07-15 07:43:12.548549] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:34.815 [2024-07-15 07:43:12.556498] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:34.815 [2024-07-15 07:43:12.557328] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:34:34.815 [2024-07-15 07:43:12.564624] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:34:34.815 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.816 [2024-07-15 07:43:12.890710] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:34:34.816 [2024-07-15 07:43:12.891283] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:34:34.816 [2024-07-15 07:43:12.891330] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:34:34.816 [2024-07-15 07:43:12.891342] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:34:34.816 [2024-07-15 07:43:12.897505] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:34.816 [2024-07-15 07:43:12.897534] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:34.816 [2024-07-15 07:43:12.905501] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:34.816 [2024-07-15 07:43:12.906342] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:34:34.816 [2024-07-15 07:43:12.922477] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:34.816 07:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.816 [2024-07-15 07:43:13.239687] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:34:34.816 [2024-07-15 07:43:13.240290] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:34:34.816 [2024-07-15 07:43:13.240315] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:34:34.816 [2024-07-15 07:43:13.240331] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:34:34.816 [2024-07-15 07:43:13.247511] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:34.816 [2024-07-15 07:43:13.247548] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:34.816 [2024-07-15 07:43:13.255517] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:34.816 [2024-07-15 07:43:13.256321] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:34:34.816 [2024-07-15 07:43:13.264526] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:34:34.816 { 00:34:34.816 "ublk_device": "/dev/ublkb0", 00:34:34.816 "id": 0, 00:34:34.816 "queue_depth": 512, 00:34:34.816 "num_queues": 4, 00:34:34.816 "bdev_name": "Malloc0" 00:34:34.816 }, 00:34:34.816 { 00:34:34.816 "ublk_device": "/dev/ublkb1", 00:34:34.816 "id": 1, 00:34:34.816 "queue_depth": 512, 00:34:34.816 "num_queues": 4, 00:34:34.816 "bdev_name": "Malloc1" 00:34:34.816 }, 00:34:34.816 { 00:34:34.816 "ublk_device": "/dev/ublkb2", 00:34:34.816 "id": 2, 00:34:34.816 "queue_depth": 512, 00:34:34.816 "num_queues": 4, 00:34:34.816 "bdev_name": "Malloc2" 00:34:34.816 }, 00:34:34.816 { 00:34:34.816 "ublk_device": "/dev/ublkb3", 00:34:34.816 "id": 3, 00:34:34.816 "queue_depth": 512, 00:34:34.816 "num_queues": 4, 00:34:34.816 "bdev_name": "Malloc3" 00:34:34.816 } 00:34:34.816 ]' 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:34:34.816 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:34:35.073 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:34:35.330 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:34:35.330 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:34:35.330 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:34:35.331 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:34:35.588 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:34:35.588 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:34:35.588 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:34:35.588 07:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
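test_create_multi_ublk repeats the same bring-up once per device index and then walks the returned list with jq, which is what the per-index checks around this point are doing. Conceptually the creation side is just a loop (a sketch; the Malloc0..Malloc3 names follow the test's convention):

  for i in 0 1 2 3; do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096     # one RAM-backed bdev per device
    ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512     # exposed as /dev/ublkb$i
  done
  ./scripts/rpc.py ublk_get_disks                                # should list ublkb0 through ublkb3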
00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:34:35.588 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:35.846 [2024-07-15 07:43:14.308766] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:34:35.846 [2024-07-15 07:43:14.344235] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:35.846 [2024-07-15 07:43:14.345569] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:34:35.846 [2024-07-15 07:43:14.351504] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:35.846 [2024-07-15 07:43:14.351871] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:34:35.846 [2024-07-15 07:43:14.351896] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:35.846 [2024-07-15 07:43:14.367625] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:34:35.846 [2024-07-15 07:43:14.405132] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:35.846 [2024-07-15 07:43:14.406629] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:34:35.846 [2024-07-15 07:43:14.410483] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:35.846 [2024-07-15 07:43:14.410830] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:34:35.846 [2024-07-15 07:43:14.410866] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.846 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:35.847 [2024-07-15 07:43:14.426626] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:34:35.847 [2024-07-15 07:43:14.456212] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:36.105 [2024-07-15 07:43:14.459894] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:34:36.105 [2024-07-15 07:43:14.464472] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:36.105 [2024-07-15 07:43:14.464850] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:34:36.105 [2024-07-15 07:43:14.464868] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:36.105 [2024-07-15 07:43:14.469764] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:34:36.105 [2024-07-15 07:43:14.516517] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:34:36.105 [2024-07-15 07:43:14.517746] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:34:36.105 [2024-07-15 07:43:14.524484] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:34:36.105 [2024-07-15 07:43:14.524816] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:34:36.105 [2024-07-15 07:43:14.524839] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.105 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:34:36.374 [2024-07-15 07:43:14.799637] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:34:36.374 [2024-07-15 07:43:14.807495] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:34:36.374 [2024-07-15 07:43:14.807573] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:34:36.374 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:34:36.374 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:36.374 07:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:34:36.374 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.374 07:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:36.631 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.631 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:36.631 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:34:36.631 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.631 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.196 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.196 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:34:37.196 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:34:37.196 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.196 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.454 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.454 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:34:37.454 07:43:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:34:37.454 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.454 07:43:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:34:37.712 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:34:37.971 ************************************ 00:34:37.971 END TEST test_create_multi_ublk 00:34:37.971 ************************************ 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:34:37.971 00:34:37.971 real 0m4.559s 00:34:37.971 user 0m1.270s 00:34:37.971 sys 0m0.184s 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.971 07:43:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@1142 -- # return 0 00:34:37.971 07:43:16 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:37.971 07:43:16 ublk -- ublk/ublk.sh@147 -- # cleanup 00:34:37.971 07:43:16 ublk -- ublk/ublk.sh@130 -- # killprocess 78203 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@948 -- # '[' -z 78203 ']' 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@952 -- # kill -0 78203 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@953 -- # uname 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78203 00:34:37.971 killing process 
with pid 78203 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78203' 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@967 -- # kill 78203 00:34:37.971 07:43:16 ublk -- common/autotest_common.sh@972 -- # wait 78203 00:34:39.344 [2024-07-15 07:43:17.662315] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:34:39.344 [2024-07-15 07:43:17.662416] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:34:40.719 00:34:40.719 real 0m30.182s 00:34:40.719 user 0m44.423s 00:34:40.719 sys 0m8.998s 00:34:40.719 ************************************ 00:34:40.719 END TEST ublk 00:34:40.719 ************************************ 00:34:40.719 07:43:19 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:40.719 07:43:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:34:40.719 07:43:19 -- common/autotest_common.sh@1142 -- # return 0 00:34:40.719 07:43:19 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:34:40.719 07:43:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:40.719 07:43:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:40.719 07:43:19 -- common/autotest_common.sh@10 -- # set +x 00:34:40.719 ************************************ 00:34:40.719 START TEST ublk_recovery 00:34:40.719 ************************************ 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:34:40.719 * Looking for test storage... 00:34:40.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:34:40.719 07:43:19 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78600 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:40.719 07:43:19 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78600 00:34:40.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
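The teardown traced just above goes through the shared killprocess helper (here against pid 78203, and again later against the recovery target). A condensed sketch of that pattern, inferred from the xtrace rather than copied from autotest_common.sh, so names and the exact sudo handling are approximations:

killprocess() {
    # Condensed sketch of the helper traced above; the real
    # autotest_common.sh version carries more error handling.
    local pid=$1
    [ -z "$pid" ] && return 1            # '[' -z <pid> ']' in the trace
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?

    if [ "$(uname)" = Linux ]; then
        # Resolve the command name (reactor_0 in this run); the trace also
        # special-cases targets whose comm resolves to sudo.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}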
00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78600 ']' 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:40.719 07:43:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.719 [2024-07-15 07:43:19.271392] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:34:40.719 [2024-07-15 07:43:19.271643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78600 ] 00:34:40.977 [2024-07-15 07:43:19.457213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:41.235 [2024-07-15 07:43:19.739003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.235 [2024-07-15 07:43:19.739004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:34:42.167 07:43:20 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:42.167 [2024-07-15 07:43:20.659535] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:42.167 [2024-07-15 07:43:20.662769] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.167 07:43:20 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.167 07:43:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:42.424 malloc0 00:34:42.424 07:43:20 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.424 07:43:20 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:34:42.424 07:43:20 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.424 07:43:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:42.424 [2024-07-15 07:43:20.825984] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:34:42.424 [2024-07-15 07:43:20.826134] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:34:42.424 [2024-07-15 07:43:20.826154] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:34:42.424 [2024-07-15 07:43:20.826169] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:34:42.424 [2024-07-15 07:43:20.833651] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:34:42.424 [2024-07-15 07:43:20.833685] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:34:42.424 
[2024-07-15 07:43:20.840488] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:34:42.424 [2024-07-15 07:43:20.840700] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:34:42.424 [2024-07-15 07:43:20.856519] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:34:42.424 1 00:34:42.424 07:43:20 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.424 07:43:20 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:34:43.366 07:43:21 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78641 00:34:43.366 07:43:21 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:34:43.366 07:43:21 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:34:43.622 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:43.622 fio-3.35 00:34:43.622 Starting 1 process 00:34:48.883 07:43:26 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78600 00:34:48.883 07:43:26 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:34:54.144 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78600 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:34:54.144 07:43:31 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78747 00:34:54.144 07:43:31 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:54.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.144 07:43:31 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78747 00:34:54.144 07:43:31 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78747 ']' 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:54.144 07:43:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:54.144 [2024-07-15 07:43:31.991976] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:34:54.144 [2024-07-15 07:43:31.992165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78747 ] 00:34:54.144 [2024-07-15 07:43:32.167224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:54.144 [2024-07-15 07:43:32.440893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.144 [2024-07-15 07:43:32.440893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:34:55.078 07:43:33 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:55.078 [2024-07-15 07:43:33.373509] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:34:55.078 [2024-07-15 07:43:33.376685] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.078 07:43:33 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:55.078 malloc0 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.078 07:43:33 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:34:55.078 [2024-07-15 07:43:33.539657] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:34:55.078 [2024-07-15 07:43:33.539723] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:34:55.078 [2024-07-15 07:43:33.539738] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:34:55.078 1 00:34:55.078 07:43:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.078 07:43:33 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78641 00:34:55.078 [2024-07-15 07:43:33.548479] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:34:55.078 [2024-07-15 07:43:33.548508] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:34:55.078 [2024-07-15 07:43:33.548622] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:35:21.603 [2024-07-15 07:43:57.245564] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:35:21.603 [2024-07-15 07:43:57.254541] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:35:21.603 [2024-07-15 07:43:57.261993] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:35:21.603 [2024-07-15 07:43:57.262043] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:35:43.560 00:35:43.560 
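The trace above is the heart of the recovery scenario: the first target (pid 78600) is killed with SIGKILL while fio is still writing to /dev/ublkb1, a second target (pid 78747) is brought up, and the existing ublk id 1 is re-attached with ublk_recover_disk instead of ublk_start_disk, driving the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY pair seen in the debug output. A condensed sketch of that flow, pieced together from the RPCs in the log; rpc.py stands for scripts/rpc.py, and waitforlisten/error handling are omitted:

# First target: create the device fio will exercise.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
spdk_pid=$!
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128          # exposes /dev/ublkb1

# Run I/O in the background, then kill the target mid-run.
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!
sleep 5
kill -9 "$spdk_pid"
sleep 5

# Second target: recover the same ublk id instead of starting a new disk.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
spdk_pid=$!
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_recover_disk malloc0 1     # triggers START/END_USER_RECOVERY

wait "$fio_proc"                       # fio should run to completion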
fio_test: (groupid=0, jobs=1): err= 0: pid=78644: Mon Jul 15 07:44:22 2024 00:35:43.560 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(2081MiB/60002msec) 00:35:43.560 slat (nsec): min=1863, max=266907, avg=6821.62, stdev=3163.92 00:35:43.560 clat (usec): min=1397, max=30400k, avg=7658.84, stdev=355737.49 00:35:43.560 lat (usec): min=1405, max=30400k, avg=7665.66, stdev=355737.49 00:35:43.560 clat percentiles (msec): 00:35:43.560 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 4], 00:35:43.560 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:35:43.560 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:35:43.560 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 11], 99.95th=[ 14], 00:35:43.560 | 99.99th=[17113] 00:35:43.560 bw ( KiB/s): min=26624, max=82752, per=100.00%, avg=71091.93, stdev=10347.62, samples=59 00:35:43.560 iops : min= 6656, max=20688, avg=17772.97, stdev=2586.90, samples=59 00:35:43.560 write: IOPS=8872, BW=34.7MiB/s (36.3MB/s)(2080MiB/60002msec); 0 zone resets 00:35:43.560 slat (nsec): min=1848, max=291635, avg=6960.98, stdev=3373.42 00:35:43.560 clat (usec): min=1262, max=30400k, avg=6745.37, stdev=308928.80 00:35:43.560 lat (usec): min=1280, max=30400k, avg=6752.33, stdev=308928.79 00:35:43.560 clat percentiles (msec): 00:35:43.560 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4], 00:35:43.560 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:35:43.560 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:35:43.560 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 11], 99.95th=[ 14], 00:35:43.560 | 99.99th=[17113] 00:35:43.560 bw ( KiB/s): min=27168, max=81536, per=100.00%, avg=71003.95, stdev=10301.95, samples=59 00:35:43.560 iops : min= 6792, max=20384, avg=17750.97, stdev=2575.48, samples=59 00:35:43.560 lat (msec) : 2=0.04%, 4=92.45%, 10=7.41%, 20=0.10%, >=2000=0.01% 00:35:43.560 cpu : usr=5.01%, sys=11.29%, ctx=34667, majf=0, minf=13 00:35:43.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:35:43.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:43.560 issued rwts: total=532851,532366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:43.560 00:35:43.560 Run status group 0 (all jobs): 00:35:43.560 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=2081MiB (2183MB), run=60002-60002msec 00:35:43.560 WRITE: bw=34.7MiB/s (36.3MB/s), 34.7MiB/s-34.7MiB/s (36.3MB/s-36.3MB/s), io=2080MiB (2181MB), run=60002-60002msec 00:35:43.560 00:35:43.560 Disk stats (read/write): 00:35:43.560 ublkb1: ios=530675/530108, merge=0/0, ticks=4025522/3472378, in_queue=7497900, util=99.95% 00:35:43.560 07:44:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:35:43.560 07:44:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.560 07:44:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.560 [2024-07-15 07:44:22.136044] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:35:43.824 [2024-07-15 07:44:22.181540] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:43.824 [2024-07-15 07:44:22.182030] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:35:43.824 [2024-07-15 07:44:22.190584] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 
00:35:43.824 [2024-07-15 07:44:22.190883] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:35:43.824 [2024-07-15 07:44:22.194510] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:35:43.824 07:44:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.824 07:44:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:35:43.824 07:44:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.824 07:44:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:43.824 [2024-07-15 07:44:22.198753] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:35:43.824 [2024-07-15 07:44:22.210515] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:35:43.824 [2024-07-15 07:44:22.210739] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:35:43.824 07:44:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.824 07:44:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:43.824 07:44:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:35:43.825 07:44:22 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78747 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78747 ']' 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78747 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78747 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78747' 00:35:43.825 killing process with pid 78747 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78747 00:35:43.825 07:44:22 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78747 00:35:45.206 [2024-07-15 07:44:23.417135] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:35:45.206 [2024-07-15 07:44:23.417448] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:35:46.578 00:35:46.578 real 1m5.889s 00:35:46.578 user 1m50.796s 00:35:46.578 sys 0m19.591s 00:35:46.578 07:44:24 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:46.578 07:44:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:46.578 ************************************ 00:35:46.578 END TEST ublk_recovery 00:35:46.578 ************************************ 00:35:46.578 07:44:24 -- common/autotest_common.sh@1142 -- # return 0 00:35:46.578 07:44:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:35:46.578 07:44:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:46.578 07:44:24 -- common/autotest_common.sh@10 -- # set +x 00:35:46.578 07:44:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@321 
-- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:35:46.578 07:44:25 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:35:46.578 07:44:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:46.578 07:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:46.578 07:44:25 -- common/autotest_common.sh@10 -- # set +x 00:35:46.578 ************************************ 00:35:46.578 START TEST ftl 00:35:46.578 ************************************ 00:35:46.578 07:44:25 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:35:46.578 * Looking for test storage... 00:35:46.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:46.578 07:44:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:35:46.578 07:44:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:46.578 07:44:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:46.578 07:44:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:46.578 07:44:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:46.578 07:44:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.578 07:44:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:46.578 07:44:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:46.578 07:44:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:46.578 07:44:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:46.578 07:44:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:46.578 07:44:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:46.578 07:44:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:46.578 07:44:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:46.578 07:44:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:46.578 07:44:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:46.578 07:44:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:46.578 07:44:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:46.578 07:44:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:46.578 07:44:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:46.578 07:44:25 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:46.578 07:44:25 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.578 07:44:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:35:46.578 07:44:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:47.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:47.146 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:47.146 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:47.146 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:47.146 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:47.146 07:44:25 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79523 00:35:47.147 07:44:25 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:35:47.147 07:44:25 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79523 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@829 -- # '[' -z 79523 ']' 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:47.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:47.147 07:44:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:47.460 [2024-07-15 07:44:25.814673] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
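The exports traced above come from ftl/common.sh, which pins the target and the initiator to separate cores and points at the JSON configs used later; setup.sh then rebinds the NVMe controllers and spdk_tgt is started paused with --wait-for-rpc. A condensed sketch of that preamble, with values copied from the trace (the at_ftl_exit trap body is not shown in the log and is omitted):

# Core/config layout exported by ftl/common.sh (values as seen in the trace).
export ftl_tgt_core_mask='[0]'
export spdk_tgt_cpumask='[0]'
export spdk_ini_cpumask='[1]'
export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
export spdk_tgt_cnfg="$testdir/config/tgt.json"
export spdk_ini_cnfg="$testdir/config/ini.json"

# Rebind the NVMe controllers to the userspace driver, then start the target
# paused so bdev options can be changed before framework_start_init.
PCI_ALLOWED= PCI_BLOCKED= DRIVER_OVERRIDE= "$rootdir/scripts/setup.sh"
"$rootdir/build/bin/spdk_tgt" --wait-for-rpc &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"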
00:35:47.460 [2024-07-15 07:44:25.814911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79523 ] 00:35:47.460 [2024-07-15 07:44:25.993938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.719 [2024-07-15 07:44:26.308162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.285 07:44:26 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:48.285 07:44:26 ftl -- common/autotest_common.sh@862 -- # return 0 00:35:48.285 07:44:26 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:35:48.545 07:44:27 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:35:49.919 07:44:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:35:49.919 07:44:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:50.178 07:44:28 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:35:50.178 07:44:28 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:35:50.178 07:44:28 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:35:50.436 07:44:28 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:35:50.436 07:44:28 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:35:50.436 07:44:28 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:35:50.436 07:44:28 ftl -- ftl/ftl.sh@50 -- # break 00:35:50.436 07:44:28 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:35:50.437 07:44:28 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:35:50.437 07:44:28 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:35:50.437 07:44:28 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:35:50.695 07:44:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:35:50.695 07:44:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:35:50.695 07:44:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:35:50.695 07:44:29 ftl -- ftl/ftl.sh@63 -- # break 00:35:50.695 07:44:29 ftl -- ftl/ftl.sh@66 -- # killprocess 79523 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@948 -- # '[' -z 79523 ']' 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@952 -- # kill -0 79523 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@953 -- # uname 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79523 00:35:50.695 killing process with pid 79523 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79523' 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@967 -- # kill 79523 00:35:50.695 07:44:29 ftl -- common/autotest_common.sh@972 -- # wait 79523 00:35:53.225 07:44:31 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:35:53.225 07:44:31 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:35:53.225 07:44:31 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:35:53.225 07:44:31 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.225 07:44:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:53.225 ************************************ 00:35:53.225 START TEST ftl_fio_basic 00:35:53.225 ************************************ 00:35:53.225 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:35:53.484 * Looking for test storage... 00:35:53.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79675 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79675 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79675 ']' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:53.484 07:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:35:53.484 [2024-07-15 07:44:32.038948] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
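fio.sh maps its positional arguments (base device 0000:00:11.0, cache device 0000:00:10.0, suite name basic) onto the list of fio jobs to run and the environment those jobs read, as the suite declarations in the trace above show. A condensed sketch of that argument handling based on the xtrace; the ${suite[$3]} lookup and the fio_kill trap body are inferred, not visible verbatim in the log:

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

device=$1                      # 0000:00:11.0 in this run
cache_device=$2                # 0000:00:10.0
tests=${suite[$3]}             # 'basic' selects the three randw-verify jobs
timeout=240

# The fio job files locate the FTL bdev through these two variables.
export FTL_BDEV_NAME=ftl0
export FTL_JSON_CONF="$testdir/config/ftl.json"

"$SPDK_BIN_DIR/spdk_tgt" -m 7 &
svcpid=$!
waitforlisten "$svcpid"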
00:35:53.484 [2024-07-15 07:44:32.039159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79675 ] 00:35:53.743 [2024-07-15 07:44:32.219918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:54.001 [2024-07-15 07:44:32.536502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.001 [2024-07-15 07:44:32.536641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.001 [2024-07-15 07:44:32.536657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:35:54.934 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:55.497 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:35:55.498 07:44:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:35:55.498 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:55.498 { 00:35:55.498 "name": "nvme0n1", 00:35:55.498 "aliases": [ 00:35:55.498 "408da4db-7d6b-4a6a-9246-b2492d1a43ec" 00:35:55.498 ], 00:35:55.498 "product_name": "NVMe disk", 00:35:55.498 "block_size": 4096, 00:35:55.498 "num_blocks": 1310720, 00:35:55.498 "uuid": "408da4db-7d6b-4a6a-9246-b2492d1a43ec", 00:35:55.498 "assigned_rate_limits": { 00:35:55.498 "rw_ios_per_sec": 0, 00:35:55.498 "rw_mbytes_per_sec": 0, 00:35:55.498 "r_mbytes_per_sec": 0, 00:35:55.498 "w_mbytes_per_sec": 0 00:35:55.498 }, 00:35:55.498 "claimed": false, 00:35:55.498 "zoned": false, 00:35:55.498 "supported_io_types": { 00:35:55.498 "read": true, 00:35:55.498 "write": true, 00:35:55.498 "unmap": true, 00:35:55.498 "flush": true, 00:35:55.498 "reset": true, 00:35:55.498 "nvme_admin": true, 00:35:55.498 "nvme_io": true, 00:35:55.498 "nvme_io_md": false, 00:35:55.498 "write_zeroes": true, 00:35:55.498 "zcopy": false, 00:35:55.498 "get_zone_info": false, 00:35:55.498 "zone_management": false, 00:35:55.498 "zone_append": false, 00:35:55.498 "compare": true, 00:35:55.498 "compare_and_write": false, 00:35:55.498 "abort": true, 00:35:55.498 "seek_hole": false, 00:35:55.498 
"seek_data": false, 00:35:55.498 "copy": true, 00:35:55.498 "nvme_iov_md": false 00:35:55.498 }, 00:35:55.498 "driver_specific": { 00:35:55.498 "nvme": [ 00:35:55.498 { 00:35:55.498 "pci_address": "0000:00:11.0", 00:35:55.498 "trid": { 00:35:55.498 "trtype": "PCIe", 00:35:55.498 "traddr": "0000:00:11.0" 00:35:55.498 }, 00:35:55.498 "ctrlr_data": { 00:35:55.498 "cntlid": 0, 00:35:55.498 "vendor_id": "0x1b36", 00:35:55.498 "model_number": "QEMU NVMe Ctrl", 00:35:55.498 "serial_number": "12341", 00:35:55.498 "firmware_revision": "8.0.0", 00:35:55.498 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:55.498 "oacs": { 00:35:55.498 "security": 0, 00:35:55.498 "format": 1, 00:35:55.498 "firmware": 0, 00:35:55.498 "ns_manage": 1 00:35:55.498 }, 00:35:55.498 "multi_ctrlr": false, 00:35:55.498 "ana_reporting": false 00:35:55.498 }, 00:35:55.498 "vs": { 00:35:55.498 "nvme_version": "1.4" 00:35:55.498 }, 00:35:55.498 "ns_data": { 00:35:55.498 "id": 1, 00:35:55.498 "can_share": false 00:35:55.498 } 00:35:55.498 } 00:35:55.498 ], 00:35:55.498 "mp_policy": "active_passive" 00:35:55.498 } 00:35:55.498 } 00:35:55.498 ]' 00:35:55.498 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:55.776 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:56.034 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:35:56.034 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:35:56.292 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=02f64569-b618-4658-9a92-f5bd1c416ce5 00:35:56.292 07:44:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 02f64569-b618-4658-9a92-f5bd1c416ce5 00:35:56.549 07:44:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.549 07:44:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.549 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:35:56.549 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:56.549 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.550 07:44:35 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:35:56.550 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:56.806 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:56.806 { 00:35:56.806 "name": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:56.806 "aliases": [ 00:35:56.806 "lvs/nvme0n1p0" 00:35:56.806 ], 00:35:56.806 "product_name": "Logical Volume", 00:35:56.806 "block_size": 4096, 00:35:56.806 "num_blocks": 26476544, 00:35:56.806 "uuid": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:56.806 "assigned_rate_limits": { 00:35:56.806 "rw_ios_per_sec": 0, 00:35:56.806 "rw_mbytes_per_sec": 0, 00:35:56.806 "r_mbytes_per_sec": 0, 00:35:56.806 "w_mbytes_per_sec": 0 00:35:56.806 }, 00:35:56.806 "claimed": false, 00:35:56.806 "zoned": false, 00:35:56.806 "supported_io_types": { 00:35:56.806 "read": true, 00:35:56.806 "write": true, 00:35:56.806 "unmap": true, 00:35:56.806 "flush": false, 00:35:56.806 "reset": true, 00:35:56.806 "nvme_admin": false, 00:35:56.806 "nvme_io": false, 00:35:56.806 "nvme_io_md": false, 00:35:56.806 "write_zeroes": true, 00:35:56.806 "zcopy": false, 00:35:56.806 "get_zone_info": false, 00:35:56.806 "zone_management": false, 00:35:56.806 "zone_append": false, 00:35:56.806 "compare": false, 00:35:56.806 "compare_and_write": false, 00:35:56.806 "abort": false, 00:35:56.806 "seek_hole": true, 00:35:56.806 "seek_data": true, 00:35:56.806 "copy": false, 00:35:56.806 "nvme_iov_md": false 00:35:56.806 }, 00:35:56.806 "driver_specific": { 00:35:56.806 "lvol": { 00:35:56.806 "lvol_store_uuid": "02f64569-b618-4658-9a92-f5bd1c416ce5", 00:35:56.806 "base_bdev": "nvme0n1", 00:35:56.806 "thin_provision": true, 00:35:56.806 "num_allocated_clusters": 0, 00:35:56.806 "snapshot": false, 00:35:56.806 "clone": false, 00:35:56.806 "esnap_clone": false 00:35:56.806 } 00:35:56.806 } 00:35:56.806 } 00:35:56.806 ]' 00:35:56.806 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:56.806 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:35:56.806 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:35:57.063 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:35:57.321 07:44:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:57.579 { 00:35:57.579 "name": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:57.579 "aliases": [ 00:35:57.579 "lvs/nvme0n1p0" 00:35:57.579 ], 00:35:57.579 "product_name": "Logical Volume", 00:35:57.579 "block_size": 4096, 00:35:57.579 "num_blocks": 26476544, 00:35:57.579 "uuid": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:57.579 "assigned_rate_limits": { 00:35:57.579 "rw_ios_per_sec": 0, 00:35:57.579 "rw_mbytes_per_sec": 0, 00:35:57.579 "r_mbytes_per_sec": 0, 00:35:57.579 "w_mbytes_per_sec": 0 00:35:57.579 }, 00:35:57.579 "claimed": false, 00:35:57.579 "zoned": false, 00:35:57.579 "supported_io_types": { 00:35:57.579 "read": true, 00:35:57.579 "write": true, 00:35:57.579 "unmap": true, 00:35:57.579 "flush": false, 00:35:57.579 "reset": true, 00:35:57.579 "nvme_admin": false, 00:35:57.579 "nvme_io": false, 00:35:57.579 "nvme_io_md": false, 00:35:57.579 "write_zeroes": true, 00:35:57.579 "zcopy": false, 00:35:57.579 "get_zone_info": false, 00:35:57.579 "zone_management": false, 00:35:57.579 "zone_append": false, 00:35:57.579 "compare": false, 00:35:57.579 "compare_and_write": false, 00:35:57.579 "abort": false, 00:35:57.579 "seek_hole": true, 00:35:57.579 "seek_data": true, 00:35:57.579 "copy": false, 00:35:57.579 "nvme_iov_md": false 00:35:57.579 }, 00:35:57.579 "driver_specific": { 00:35:57.579 "lvol": { 00:35:57.579 "lvol_store_uuid": "02f64569-b618-4658-9a92-f5bd1c416ce5", 00:35:57.579 "base_bdev": "nvme0n1", 00:35:57.579 "thin_provision": true, 00:35:57.579 "num_allocated_clusters": 0, 00:35:57.579 "snapshot": false, 00:35:57.579 "clone": false, 00:35:57.579 "esnap_clone": false 00:35:57.579 } 00:35:57.579 } 00:35:57.579 } 00:35:57.579 ]' 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:35:57.579 07:44:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:35:57.837 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ec64e7c3-e822-4a5c-8799-889959a2c36c 
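The RPCs traced above assemble the two halves of the FTL device: a 103424 MiB thin-provisioned logical volume on the base NVMe at 0000:00:11.0 and a 5171 MiB split of the cache NVMe at 0000:00:10.0; the step that follows in the log creates ftl0 on top of them with a 60 MiB L2P DRAM limit. A condensed sketch of that sequence using the identifiers from this run; rpc.py stands for scripts/rpc.py and the get_bdev_size/jq bookkeeping is left out:

# Base device: thin lvol carved out of nvme0n1.
rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 02f64569-b618-4658-9a92-f5bd1c416ce5
# returns lvol uuid ec64e7c3-e822-4a5c-8799-889959a2c36c

# Cache device: first 5171 MiB of nvc0n1 becomes the NV cache partition.
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
rpc.py bdev_split_create nvc0n1 -s 5171 1          # yields nvc0n1p0

# FTL bdev combining both, with the 60 MiB L2P DRAM limit used by this suite.
rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d ec64e7c3-e822-4a5c-8799-889959a2c36c -c nvc0n1p0 --l2p_dram_limit 60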
00:35:57.837 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:35:57.837 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec64e7c3-e822-4a5c-8799-889959a2c36c 00:35:58.095 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:58.095 { 00:35:58.095 "name": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:58.095 "aliases": [ 00:35:58.095 "lvs/nvme0n1p0" 00:35:58.095 ], 00:35:58.095 "product_name": "Logical Volume", 00:35:58.095 "block_size": 4096, 00:35:58.095 "num_blocks": 26476544, 00:35:58.095 "uuid": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:35:58.095 "assigned_rate_limits": { 00:35:58.095 "rw_ios_per_sec": 0, 00:35:58.095 "rw_mbytes_per_sec": 0, 00:35:58.095 "r_mbytes_per_sec": 0, 00:35:58.095 "w_mbytes_per_sec": 0 00:35:58.095 }, 00:35:58.095 "claimed": false, 00:35:58.095 "zoned": false, 00:35:58.095 "supported_io_types": { 00:35:58.095 "read": true, 00:35:58.095 "write": true, 00:35:58.095 "unmap": true, 00:35:58.095 "flush": false, 00:35:58.095 "reset": true, 00:35:58.095 "nvme_admin": false, 00:35:58.095 "nvme_io": false, 00:35:58.095 "nvme_io_md": false, 00:35:58.095 "write_zeroes": true, 00:35:58.095 "zcopy": false, 00:35:58.095 "get_zone_info": false, 00:35:58.095 "zone_management": false, 00:35:58.095 "zone_append": false, 00:35:58.095 "compare": false, 00:35:58.095 "compare_and_write": false, 00:35:58.095 "abort": false, 00:35:58.095 "seek_hole": true, 00:35:58.095 "seek_data": true, 00:35:58.095 "copy": false, 00:35:58.095 "nvme_iov_md": false 00:35:58.095 }, 00:35:58.095 "driver_specific": { 00:35:58.095 "lvol": { 00:35:58.095 "lvol_store_uuid": "02f64569-b618-4658-9a92-f5bd1c416ce5", 00:35:58.095 "base_bdev": "nvme0n1", 00:35:58.095 "thin_provision": true, 00:35:58.095 "num_allocated_clusters": 0, 00:35:58.095 "snapshot": false, 00:35:58.095 "clone": false, 00:35:58.095 "esnap_clone": false 00:35:58.095 } 00:35:58.095 } 00:35:58.095 } 00:35:58.095 ]' 00:35:58.095 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:35:58.353 07:44:36 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ec64e7c3-e822-4a5c-8799-889959a2c36c -c nvc0n1p0 --l2p_dram_limit 60 00:35:58.612 [2024-07-15 07:44:37.072675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.072754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:58.612 [2024-07-15 07:44:37.072779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:58.612 [2024-07-15 07:44:37.072795] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.072909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.072934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:58.612 [2024-07-15 07:44:37.072948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:35:58.612 [2024-07-15 07:44:37.072963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.073008] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:58.612 [2024-07-15 07:44:37.078160] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:58.612 [2024-07-15 07:44:37.078205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.078231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:58.612 [2024-07-15 07:44:37.078246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.210 ms 00:35:58.612 [2024-07-15 07:44:37.078261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.078513] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4fb0c9d1-bcbd-4135-b28f-af7f3fa48d68 00:35:58.612 [2024-07-15 07:44:37.081052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.081094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:35:58.612 [2024-07-15 07:44:37.081123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:58.612 [2024-07-15 07:44:37.081137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.095416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.095651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:58.612 [2024-07-15 07:44:37.095791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.157 ms 00:35:58.612 [2024-07-15 07:44:37.095938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.096154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.096184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:58.612 [2024-07-15 07:44:37.096205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:35:58.612 [2024-07-15 07:44:37.096217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.096345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.096364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:58.612 [2024-07-15 07:44:37.096381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:35:58.612 [2024-07-15 07:44:37.096394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.096479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:58.612 [2024-07-15 07:44:37.102534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.102579] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:58.612 [2024-07-15 07:44:37.102613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:35:58.612 [2024-07-15 07:44:37.102628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.102685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.102706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:58.612 [2024-07-15 07:44:37.102720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:58.612 [2024-07-15 07:44:37.102734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.102793] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:35:58.612 [2024-07-15 07:44:37.103013] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:58.612 [2024-07-15 07:44:37.103038] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:58.612 [2024-07-15 07:44:37.103062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:35:58.612 [2024-07-15 07:44:37.103078] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:58.612 [2024-07-15 07:44:37.103096] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:58.612 [2024-07-15 07:44:37.103109] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:58.612 [2024-07-15 07:44:37.103125] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:58.612 [2024-07-15 07:44:37.103137] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:58.612 [2024-07-15 07:44:37.103155] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:58.612 [2024-07-15 07:44:37.103169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.103184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:58.612 [2024-07-15 07:44:37.103196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:35:58.612 [2024-07-15 07:44:37.103210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.103319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.612 [2024-07-15 07:44:37.103345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:58.612 [2024-07-15 07:44:37.103359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:35:58.612 [2024-07-15 07:44:37.103372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.612 [2024-07-15 07:44:37.103531] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:58.612 [2024-07-15 07:44:37.103562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:58.612 [2024-07-15 07:44:37.103577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:58.612 [2024-07-15 07:44:37.103592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.612 [2024-07-15 07:44:37.103604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:58.613 [2024-07-15 
07:44:37.103618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:58.613 [2024-07-15 07:44:37.103653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:58.613 [2024-07-15 07:44:37.103678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:58.613 [2024-07-15 07:44:37.103692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:58.613 [2024-07-15 07:44:37.103703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:58.613 [2024-07-15 07:44:37.103718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:58.613 [2024-07-15 07:44:37.103730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:58.613 [2024-07-15 07:44:37.103743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:58.613 [2024-07-15 07:44:37.103770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:58.613 [2024-07-15 07:44:37.103814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:58.613 [2024-07-15 07:44:37.103868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:58.613 [2024-07-15 07:44:37.103908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:58.613 [2024-07-15 07:44:37.103955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:58.613 [2024-07-15 07:44:37.103967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:58.613 [2024-07-15 07:44:37.103986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:58.613 [2024-07-15 07:44:37.103998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:58.613 [2024-07-15 07:44:37.104020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:58.613 [2024-07-15 07:44:37.104033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:58.613 [2024-07-15 07:44:37.104047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:58.613 [2024-07-15 07:44:37.104058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:35:58.613 [2024-07-15 07:44:37.104072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:58.613 [2024-07-15 07:44:37.104084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:58.613 [2024-07-15 07:44:37.104101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.104113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:58.613 [2024-07-15 07:44:37.104127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:58.613 [2024-07-15 07:44:37.104138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.104152] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:58.613 [2024-07-15 07:44:37.104164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:58.613 [2024-07-15 07:44:37.104200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:58.613 [2024-07-15 07:44:37.104213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:58.613 [2024-07-15 07:44:37.104229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:58.613 [2024-07-15 07:44:37.104241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:58.613 [2024-07-15 07:44:37.104258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:58.613 [2024-07-15 07:44:37.104270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:58.613 [2024-07-15 07:44:37.104284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:58.613 [2024-07-15 07:44:37.104302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:58.613 [2024-07-15 07:44:37.104322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:58.613 [2024-07-15 07:44:37.104338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:58.613 [2024-07-15 07:44:37.104369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:58.613 [2024-07-15 07:44:37.104384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:58.613 [2024-07-15 07:44:37.104396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:58.613 [2024-07-15 07:44:37.104411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:58.613 [2024-07-15 07:44:37.104427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:58.613 [2024-07-15 07:44:37.104442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:58.613 [2024-07-15 07:44:37.104467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:58.613 [2024-07-15 
07:44:37.104486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:58.613 [2024-07-15 07:44:37.104498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:58.613 [2024-07-15 07:44:37.104585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:58.613 [2024-07-15 07:44:37.104604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:58.613 [2024-07-15 07:44:37.104632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:58.613 [2024-07-15 07:44:37.104646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:58.613 [2024-07-15 07:44:37.104658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:58.613 [2024-07-15 07:44:37.104675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:58.613 [2024-07-15 07:44:37.104688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:58.613 [2024-07-15 07:44:37.104702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:35:58.613 [2024-07-15 07:44:37.104715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:58.613 [2024-07-15 07:44:37.104804] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:35:58.613 [2024-07-15 07:44:37.104822] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:36:02.854 [2024-07-15 07:44:41.055155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.055253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:36:02.854 [2024-07-15 07:44:41.055283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3950.354 ms 00:36:02.854 [2024-07-15 07:44:41.055297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.100801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.100902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:02.854 [2024-07-15 07:44:41.100930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.153 ms 00:36:02.854 [2024-07-15 07:44:41.100945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.101176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.101197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:02.854 [2024-07-15 07:44:41.101215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:36:02.854 [2024-07-15 07:44:41.101229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.157932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.158022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:02.854 [2024-07-15 07:44:41.158051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.624 ms 00:36:02.854 [2024-07-15 07:44:41.158064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.158161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.158179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:02.854 [2024-07-15 07:44:41.158196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:02.854 [2024-07-15 07:44:41.158209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.159112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.159147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:02.854 [2024-07-15 07:44:41.159171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:36:02.854 [2024-07-15 07:44:41.159183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.159409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.159429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:02.854 [2024-07-15 07:44:41.159446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:36:02.854 [2024-07-15 07:44:41.159473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.184784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.184870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:02.854 [2024-07-15 
07:44:41.184898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.267 ms 00:36:02.854 [2024-07-15 07:44:41.184912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.203746] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:36:02.854 [2024-07-15 07:44:41.231576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.231685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:02.854 [2024-07-15 07:44:41.231715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.461 ms 00:36:02.854 [2024-07-15 07:44:41.231731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.854 [2024-07-15 07:44:41.300145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.854 [2024-07-15 07:44:41.300256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:36:02.854 [2024-07-15 07:44:41.300280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.329 ms 00:36:02.855 [2024-07-15 07:44:41.300296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.855 [2024-07-15 07:44:41.300638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.855 [2024-07-15 07:44:41.300666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:02.855 [2024-07-15 07:44:41.300681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:36:02.855 [2024-07-15 07:44:41.300700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.855 [2024-07-15 07:44:41.337263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.855 [2024-07-15 07:44:41.337389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:36:02.855 [2024-07-15 07:44:41.337414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.458 ms 00:36:02.855 [2024-07-15 07:44:41.337431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.855 [2024-07-15 07:44:41.373797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.855 [2024-07-15 07:44:41.373918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:36:02.855 [2024-07-15 07:44:41.373944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.239 ms 00:36:02.855 [2024-07-15 07:44:41.373961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:02.855 [2024-07-15 07:44:41.374992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:02.855 [2024-07-15 07:44:41.375032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:02.855 [2024-07-15 07:44:41.375050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:36:02.855 [2024-07-15 07:44:41.375065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.478117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.478248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:36:03.113 [2024-07-15 07:44:41.478277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.923 ms 00:36:03.113 [2024-07-15 07:44:41.478300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 
07:44:41.516048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.516188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:36:03.113 [2024-07-15 07:44:41.516214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.638 ms 00:36:03.113 [2024-07-15 07:44:41.516231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.553431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.553559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:36:03.113 [2024-07-15 07:44:41.553584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.092 ms 00:36:03.113 [2024-07-15 07:44:41.553600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.588972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.589079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:03.113 [2024-07-15 07:44:41.589105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.257 ms 00:36:03.113 [2024-07-15 07:44:41.589121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.589232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.589260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:03.113 [2024-07-15 07:44:41.589280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:36:03.113 [2024-07-15 07:44:41.589300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.589534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:03.113 [2024-07-15 07:44:41.589562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:03.113 [2024-07-15 07:44:41.589577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:36:03.113 [2024-07-15 07:44:41.589601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:03.113 [2024-07-15 07:44:41.591315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4518.010 ms, result 0 00:36:03.113 { 00:36:03.113 "name": "ftl0", 00:36:03.113 "uuid": "4fb0c9d1-bcbd-4135-b28f-af7f3fa48d68" 00:36:03.113 } 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:03.113 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:03.402 07:44:41 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:36:03.659 [ 00:36:03.659 { 00:36:03.659 "name": "ftl0", 00:36:03.659 "aliases": [ 00:36:03.659 "4fb0c9d1-bcbd-4135-b28f-af7f3fa48d68" 00:36:03.659 ], 00:36:03.659 "product_name": "FTL 
disk", 00:36:03.659 "block_size": 4096, 00:36:03.659 "num_blocks": 20971520, 00:36:03.659 "uuid": "4fb0c9d1-bcbd-4135-b28f-af7f3fa48d68", 00:36:03.659 "assigned_rate_limits": { 00:36:03.659 "rw_ios_per_sec": 0, 00:36:03.659 "rw_mbytes_per_sec": 0, 00:36:03.659 "r_mbytes_per_sec": 0, 00:36:03.659 "w_mbytes_per_sec": 0 00:36:03.659 }, 00:36:03.659 "claimed": false, 00:36:03.659 "zoned": false, 00:36:03.659 "supported_io_types": { 00:36:03.659 "read": true, 00:36:03.659 "write": true, 00:36:03.659 "unmap": true, 00:36:03.659 "flush": true, 00:36:03.659 "reset": false, 00:36:03.659 "nvme_admin": false, 00:36:03.659 "nvme_io": false, 00:36:03.659 "nvme_io_md": false, 00:36:03.659 "write_zeroes": true, 00:36:03.659 "zcopy": false, 00:36:03.659 "get_zone_info": false, 00:36:03.659 "zone_management": false, 00:36:03.659 "zone_append": false, 00:36:03.659 "compare": false, 00:36:03.659 "compare_and_write": false, 00:36:03.659 "abort": false, 00:36:03.659 "seek_hole": false, 00:36:03.659 "seek_data": false, 00:36:03.659 "copy": false, 00:36:03.659 "nvme_iov_md": false 00:36:03.659 }, 00:36:03.659 "driver_specific": { 00:36:03.659 "ftl": { 00:36:03.659 "base_bdev": "ec64e7c3-e822-4a5c-8799-889959a2c36c", 00:36:03.659 "cache": "nvc0n1p0" 00:36:03.659 } 00:36:03.659 } 00:36:03.659 } 00:36:03.659 ] 00:36:03.659 07:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:36:03.659 07:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:36:03.659 07:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:36:03.917 07:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:36:03.917 07:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:36:04.174 [2024-07-15 07:44:42.660077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.174 [2024-07-15 07:44:42.660161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:04.174 [2024-07-15 07:44:42.660193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:04.174 [2024-07-15 07:44:42.660212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.174 [2024-07-15 07:44:42.660282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:04.174 [2024-07-15 07:44:42.664382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.174 [2024-07-15 07:44:42.664425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:04.174 [2024-07-15 07:44:42.664443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:36:04.174 [2024-07-15 07:44:42.664469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.174 [2024-07-15 07:44:42.665009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.174 [2024-07-15 07:44:42.665054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:04.174 [2024-07-15 07:44:42.665071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:36:04.175 [2024-07-15 07:44:42.665086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.668314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.668355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:04.175 
[2024-07-15 07:44:42.668372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:36:04.175 [2024-07-15 07:44:42.668387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.674909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.674950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:04.175 [2024-07-15 07:44:42.674966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.479 ms 00:36:04.175 [2024-07-15 07:44:42.674981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.709419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.709528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:04.175 [2024-07-15 07:44:42.709553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.281 ms 00:36:04.175 [2024-07-15 07:44:42.709570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.728497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.728574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:04.175 [2024-07-15 07:44:42.728599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.857 ms 00:36:04.175 [2024-07-15 07:44:42.728615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.728918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.728945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:04.175 [2024-07-15 07:44:42.728960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:36:04.175 [2024-07-15 07:44:42.728975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.175 [2024-07-15 07:44:42.760022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.175 [2024-07-15 07:44:42.760115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:36:04.175 [2024-07-15 07:44:42.760138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.010 ms 00:36:04.175 [2024-07-15 07:44:42.760154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.434 [2024-07-15 07:44:42.792418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.434 [2024-07-15 07:44:42.792539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:36:04.434 [2024-07-15 07:44:42.792563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.194 ms 00:36:04.434 [2024-07-15 07:44:42.792579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.434 [2024-07-15 07:44:42.824735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.434 [2024-07-15 07:44:42.824841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:04.434 [2024-07-15 07:44:42.824864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.071 ms 00:36:04.434 [2024-07-15 07:44:42.824880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.434 [2024-07-15 07:44:42.857734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.434 [2024-07-15 07:44:42.857862] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:04.434 [2024-07-15 07:44:42.857887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.660 ms 00:36:04.434 [2024-07-15 07:44:42.857903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.434 [2024-07-15 07:44:42.858025] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:04.434 [2024-07-15 07:44:42.858068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:04.434 [2024-07-15 07:44:42.858339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 
[2024-07-15 07:44:42.858407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:36:04.435 [2024-07-15 07:44:42.858822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.858995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:04.435 [2024-07-15 07:44:42.859678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:04.435 [2024-07-15 07:44:42.859697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4fb0c9d1-bcbd-4135-b28f-af7f3fa48d68 00:36:04.435 [2024-07-15 07:44:42.859713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:04.435 [2024-07-15 07:44:42.859733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:04.435 [2024-07-15 07:44:42.859756] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:04.435 [2024-07-15 07:44:42.859769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:04.435 [2024-07-15 07:44:42.859783] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:04.435 [2024-07-15 07:44:42.859796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:04.435 [2024-07-15 07:44:42.859810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:04.435 [2024-07-15 07:44:42.859821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:04.435 [2024-07-15 07:44:42.859835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:04.435 [2024-07-15 07:44:42.859847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.435 [2024-07-15 07:44:42.859863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:04.435 [2024-07-15 07:44:42.859876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:36:04.435 [2024-07-15 07:44:42.859891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.435 [2024-07-15 07:44:42.878349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.435 [2024-07-15 07:44:42.878435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:04.435 [2024-07-15 07:44:42.878474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:36:04.435 [2024-07-15 07:44:42.878494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.435 [2024-07-15 07:44:42.879058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.435 [2024-07-15 07:44:42.879099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:04.435 [2024-07-15 07:44:42.879115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:36:04.435 [2024-07-15 07:44:42.879130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.435 [2024-07-15 07:44:42.941662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.435 [2024-07-15 07:44:42.941767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:04.435 [2024-07-15 07:44:42.941789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.435 [2024-07-15 07:44:42.941805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:36:04.435 [2024-07-15 07:44:42.941932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.435 [2024-07-15 07:44:42.941954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:04.435 [2024-07-15 07:44:42.941969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.435 [2024-07-15 07:44:42.941984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.435 [2024-07-15 07:44:42.942161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.435 [2024-07-15 07:44:42.942193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:04.435 [2024-07-15 07:44:42.942219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.435 [2024-07-15 07:44:42.942234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.435 [2024-07-15 07:44:42.942283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.435 [2024-07-15 07:44:42.942306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:04.435 [2024-07-15 07:44:42.942320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.435 [2024-07-15 07:44:42.942335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.693 [2024-07-15 07:44:43.065198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.693 [2024-07-15 07:44:43.065291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:04.693 [2024-07-15 07:44:43.065313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.693 [2024-07-15 07:44:43.065329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.693 [2024-07-15 07:44:43.158630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.693 [2024-07-15 07:44:43.158726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:04.693 [2024-07-15 07:44:43.158750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.693 [2024-07-15 07:44:43.158766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.693 [2024-07-15 07:44:43.158939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.693 [2024-07-15 07:44:43.158965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:04.694 [2024-07-15 07:44:43.158985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 07:44:43.158999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.159087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.694 [2024-07-15 07:44:43.159114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:04.694 [2024-07-15 07:44:43.159129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 07:44:43.159144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.159305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.694 [2024-07-15 07:44:43.159331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:04.694 [2024-07-15 07:44:43.159349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 
07:44:43.159372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.159445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.694 [2024-07-15 07:44:43.159494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:04.694 [2024-07-15 07:44:43.159509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 07:44:43.159524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.159601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.694 [2024-07-15 07:44:43.159622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:04.694 [2024-07-15 07:44:43.159635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 07:44:43.159653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.159730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.694 [2024-07-15 07:44:43.159770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:04.694 [2024-07-15 07:44:43.159785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.694 [2024-07-15 07:44:43.159800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.694 [2024-07-15 07:44:43.160031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 499.925 ms, result 0 00:36:04.694 true 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79675 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79675 ']' 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79675 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79675 00:36:04.694 killing process with pid 79675 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79675' 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79675 00:36:04.694 07:44:43 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79675 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:09.951 07:44:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:36:09.951 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:36:09.951 fio-3.35 00:36:09.951 Starting 1 thread 00:36:16.557 00:36:16.557 test: (groupid=0, jobs=1): err= 0: pid=79905: Mon Jul 15 07:44:54 2024 00:36:16.557 read: IOPS=865, BW=57.5MiB/s (60.3MB/s)(255MiB/4428msec) 00:36:16.557 slat (nsec): min=5567, max=45742, avg=7767.35, stdev=3441.05 00:36:16.557 clat (usec): min=354, max=1639, avg=515.01, stdev=60.46 00:36:16.557 lat (usec): min=363, max=1646, avg=522.78, stdev=61.02 00:36:16.557 clat percentiles (usec): 00:36:16.557 | 1.00th=[ 388], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 461], 00:36:16.557 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[ 529], 00:36:16.557 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 611], 00:36:16.557 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 889], 99.95th=[ 1237], 00:36:16.557 | 99.99th=[ 1647] 00:36:16.557 write: IOPS=871, BW=57.9MiB/s (60.7MB/s)(256MiB/4423msec); 0 zone resets 00:36:16.557 slat (usec): min=19, max=108, avg=29.11, stdev= 7.36 00:36:16.557 clat (usec): min=376, max=1883, avg=581.97, stdev=70.52 00:36:16.557 lat (usec): min=414, max=1920, avg=611.08, stdev=71.06 00:36:16.557 clat percentiles (usec): 00:36:16.557 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 506], 20.00th=[ 545], 00:36:16.557 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:36:16.557 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 685], 00:36:16.557 | 99.00th=[ 832], 99.50th=[ 881], 99.90th=[ 1090], 99.95th=[ 1680], 00:36:16.557 | 99.99th=[ 1876] 00:36:16.557 bw ( KiB/s): min=57256, max=61472, per=100.00%, avg=59331.75, stdev=1296.06, samples=8 00:36:16.557 iops : min= 842, max= 904, avg=872.50, stdev=19.03, samples=8 00:36:16.557 lat (usec) : 500=24.37%, 750=74.57%, 1000=0.94% 00:36:16.557 lat 
(msec) : 2=0.12% 00:36:16.557 cpu : usr=99.21%, sys=0.11%, ctx=6, majf=0, minf=1172 00:36:16.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.557 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:16.557 00:36:16.557 Run status group 0 (all jobs): 00:36:16.557 READ: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=255MiB (267MB), run=4428-4428msec 00:36:16.557 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=256MiB (269MB), run=4423-4423msec 00:36:17.932 ----------------------------------------------------- 00:36:17.932 Suppressions used: 00:36:17.932 count bytes template 00:36:17.932 1 5 /usr/src/fio/parse.c 00:36:17.932 1 8 libtcmalloc_minimal.so 00:36:17.932 1 904 libcrypto.so 00:36:17.932 ----------------------------------------------------- 00:36:17.932 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:17.932 07:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:36:17.932 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:36:17.932 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:36:17.932 fio-3.35 00:36:17.932 Starting 2 threads 00:36:49.993 00:36:49.993 first_half: (groupid=0, jobs=1): err= 0: pid=80014: Mon Jul 15 07:45:27 2024 00:36:49.993 read: IOPS=2220, BW=8883KiB/s (9096kB/s)(256MiB/29483msec) 00:36:49.993 slat (nsec): min=4918, max=78836, avg=8907.02, stdev=2971.60 00:36:49.993 clat (usec): min=888, max=327682, avg=48718.99, stdev=31791.16 00:36:49.993 lat (usec): min=894, max=327691, avg=48727.90, stdev=31791.39 00:36:49.993 clat percentiles (msec): 00:36:49.993 | 1.00th=[ 13], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:36:49.993 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:36:49.993 | 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 54], 95.00th=[ 97], 00:36:49.993 | 99.00th=[ 215], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 292], 00:36:49.993 | 99.99th=[ 317] 00:36:49.993 write: IOPS=2226, BW=8906KiB/s (9120kB/s)(256MiB/29434msec); 0 zone resets 00:36:49.993 slat (usec): min=6, max=523, avg=10.22, stdev= 5.89 00:36:49.993 clat (usec): min=534, max=59382, avg=8861.85, stdev=8697.76 00:36:49.993 lat (usec): min=544, max=59389, avg=8872.07, stdev=8697.87 00:36:49.993 clat percentiles (usec): 00:36:49.993 | 1.00th=[ 1074], 5.00th=[ 1500], 10.00th=[ 1942], 20.00th=[ 3687], 00:36:49.993 | 30.00th=[ 4686], 40.00th=[ 5800], 50.00th=[ 6718], 60.00th=[ 7701], 00:36:49.993 | 70.00th=[ 8979], 80.00th=[10814], 90.00th=[16712], 95.00th=[25035], 00:36:49.993 | 99.00th=[45876], 99.50th=[52167], 99.90th=[57410], 99.95th=[57934], 00:36:49.993 | 99.99th=[58459] 00:36:49.993 bw ( KiB/s): min= 2352, max=42032, per=100.00%, avg=22633.83, stdev=11641.05, samples=23 00:36:49.993 iops : min= 588, max=10508, avg=5658.43, stdev=2910.25, samples=23 00:36:49.993 lat (usec) : 750=0.05%, 1000=0.27% 00:36:49.993 lat (msec) : 2=4.96%, 4=5.96%, 10=27.22%, 20=9.93%, 50=44.57% 00:36:49.993 lat (msec) : 100=4.60%, 250=2.33%, 500=0.13% 00:36:49.993 cpu : usr=99.08%, sys=0.20%, ctx=47, majf=0, minf=5534 00:36:49.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:36:49.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.993 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:49.993 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:49.993 second_half: (groupid=0, jobs=1): err= 0: pid=80015: Mon Jul 15 07:45:27 2024 00:36:49.993 read: IOPS=2240, BW=8962KiB/s (9177kB/s)(256MiB/29229msec) 00:36:49.993 slat (nsec): min=5085, max=93344, avg=8464.99, stdev=2623.49 00:36:49.993 clat (msec): min=12, max=286, avg=49.10, stdev=28.61 00:36:49.993 lat (msec): min=12, max=286, avg=49.11, stdev=28.61 00:36:49.993 clat percentiles (msec): 00:36:49.993 | 1.00th=[ 38], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:36:49.993 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 43], 00:36:49.993 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 56], 95.00th=[ 89], 00:36:49.993 | 99.00th=[ 207], 99.50th=[ 222], 
99.90th=[ 253], 99.95th=[ 257], 00:36:49.993 | 99.99th=[ 279] 00:36:49.993 write: IOPS=2408, BW=9632KiB/s (9864kB/s)(256MiB/27215msec); 0 zone resets 00:36:49.993 slat (usec): min=5, max=838, avg= 9.66, stdev= 6.90 00:36:49.993 clat (usec): min=532, max=53074, avg=7993.25, stdev=4990.65 00:36:49.993 lat (usec): min=572, max=53083, avg=8002.91, stdev=4990.84 00:36:49.993 clat percentiles (usec): 00:36:49.993 | 1.00th=[ 1369], 5.00th=[ 2278], 10.00th=[ 3326], 20.00th=[ 4359], 00:36:49.993 | 30.00th=[ 5407], 40.00th=[ 6259], 50.00th=[ 6980], 60.00th=[ 7701], 00:36:49.993 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[14877], 95.00th=[17171], 00:36:49.993 | 99.00th=[27132], 99.50th=[32900], 99.90th=[43254], 99.95th=[44303], 00:36:49.993 | 99.99th=[51119] 00:36:49.993 bw ( KiB/s): min= 8112, max=39400, per=100.00%, avg=22795.13, stdev=9215.13, samples=23 00:36:49.994 iops : min= 2028, max= 9850, avg=5698.78, stdev=2303.78, samples=23 00:36:49.994 lat (usec) : 750=0.03%, 1000=0.12% 00:36:49.994 lat (msec) : 2=1.71%, 4=6.00%, 10=31.69%, 20=9.62%, 50=43.50% 00:36:49.994 lat (msec) : 100=5.05%, 250=2.22%, 500=0.06% 00:36:49.994 cpu : usr=99.13%, sys=0.15%, ctx=89, majf=0, minf=5585 00:36:49.994 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:36:49.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.994 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:49.994 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.994 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:49.994 00:36:49.994 Run status group 0 (all jobs): 00:36:49.994 READ: bw=17.4MiB/s (18.2MB/s), 8883KiB/s-8962KiB/s (9096kB/s-9177kB/s), io=512MiB (536MB), run=29229-29483msec 00:36:49.994 WRITE: bw=17.4MiB/s (18.2MB/s), 8906KiB/s-9632KiB/s (9120kB/s-9864kB/s), io=512MiB (537MB), run=27215-29434msec 00:36:51.896 ----------------------------------------------------- 00:36:51.896 Suppressions used: 00:36:51.896 count bytes template 00:36:51.896 2 10 /usr/src/fio/parse.c 00:36:51.896 3 288 /usr/src/fio/iolog.c 00:36:51.896 1 8 libtcmalloc_minimal.so 00:36:51.896 1 904 libcrypto.so 00:36:51.896 ----------------------------------------------------- 00:36:51.896 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:36:51.896 07:45:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 
-- # local sanitizers 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:51.897 07:45:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:36:51.897 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:36:51.897 fio-3.35 00:36:51.897 Starting 1 thread 00:37:13.867 00:37:13.867 test: (groupid=0, jobs=1): err= 0: pid=80383: Mon Jul 15 07:45:48 2024 00:37:13.867 read: IOPS=6151, BW=24.0MiB/s (25.2MB/s)(255MiB/10600msec) 00:37:13.867 slat (nsec): min=4883, max=57119, avg=7729.63, stdev=2555.18 00:37:13.867 clat (usec): min=909, max=47027, avg=20796.57, stdev=1657.74 00:37:13.867 lat (usec): min=914, max=47038, avg=20804.30, stdev=1658.06 00:37:13.867 clat percentiles (usec): 00:37:13.867 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19792], 00:37:13.867 | 30.00th=[20055], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:37:13.867 | 70.00th=[21103], 80.00th=[21890], 90.00th=[22414], 95.00th=[23200], 00:37:13.867 | 99.00th=[26084], 99.50th=[27919], 99.90th=[35390], 99.95th=[41681], 00:37:13.867 | 99.99th=[46400] 00:37:13.867 write: IOPS=10.6k, BW=41.2MiB/s (43.3MB/s)(256MiB/6206msec); 0 zone resets 00:37:13.867 slat (usec): min=6, max=267, avg=10.72, stdev= 5.54 00:37:13.867 clat (usec): min=646, max=74424, avg=12054.62, stdev=15154.48 00:37:13.867 lat (usec): min=657, max=74434, avg=12065.34, stdev=15154.48 00:37:13.867 clat percentiles (usec): 00:37:13.867 | 1.00th=[ 955], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1582], 00:37:13.867 | 30.00th=[ 1844], 40.00th=[ 2442], 50.00th=[ 7832], 60.00th=[ 9110], 00:37:13.867 | 70.00th=[10683], 80.00th=[13173], 90.00th=[43254], 95.00th=[46924], 00:37:13.867 | 99.00th=[55837], 99.50th=[57410], 99.90th=[60031], 99.95th=[61080], 00:37:13.867 | 99.99th=[70779] 00:37:13.867 bw ( KiB/s): min=13800, max=60608, per=95.47%, avg=40329.85, stdev=11224.18, samples=13 00:37:13.867 iops : min= 3450, max=15152, avg=10082.46, stdev=2806.05, samples=13 00:37:13.867 lat (usec) : 750=0.02%, 1000=0.71% 00:37:13.867 lat (msec) : 2=16.52%, 4=3.66%, 10=12.32%, 20=25.17%, 50=39.98% 00:37:13.867 lat (msec) : 100=1.63% 00:37:13.867 cpu : usr=98.82%, sys=0.30%, ctx=134, majf=0, minf=5568 00:37:13.867 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:37:13.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.867 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:13.867 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:13.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:13.867 00:37:13.867 Run status group 0 (all jobs): 00:37:13.867 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=255MiB (267MB), run=10600-10600msec 00:37:13.867 WRITE: bw=41.2MiB/s (43.3MB/s), 41.2MiB/s-41.2MiB/s (43.3MB/s-43.3MB/s), io=256MiB (268MB), run=6206-6206msec 00:37:13.867 ----------------------------------------------------- 00:37:13.867 Suppressions used: 00:37:13.867 count bytes template 00:37:13.867 1 5 /usr/src/fio/parse.c 00:37:13.867 2 192 /usr/src/fio/iolog.c 00:37:13.867 1 8 libtcmalloc_minimal.so 00:37:13.867 1 904 libcrypto.so 00:37:13.867 ----------------------------------------------------- 00:37:13.867 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:13.867 Remove shared memory files 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62085 /dev/shm/spdk_tgt_trace.pid78600 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:37:13.867 ************************************ 00:37:13.867 END TEST ftl_fio_basic 00:37:13.867 ************************************ 00:37:13.867 00:37:13.867 real 1m18.979s 00:37:13.867 user 2m54.209s 00:37:13.867 sys 0m4.861s 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:13.867 07:45:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:13.867 07:45:50 ftl -- common/autotest_common.sh@1142 -- # return 0 00:37:13.867 07:45:50 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:37:13.867 07:45:50 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:13.867 07:45:50 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:13.867 07:45:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:13.867 ************************************ 00:37:13.867 START TEST ftl_bdevperf 00:37:13.867 ************************************ 00:37:13.867 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:37:13.867 * Looking for test storage... 
00:37:13.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:37:13.868 07:45:50 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80644 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80644 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80644 ']' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:13.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:13.868 07:45:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:13.868 [2024-07-15 07:45:51.043769] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:37:13.868 [2024-07-15 07:45:51.043959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80644 ] 00:37:13.868 [2024-07-15 07:45:51.213302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.868 [2024-07-15 07:45:51.486108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:13.868 07:45:52 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:37:13.868 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:14.126 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:14.126 { 00:37:14.126 "name": "nvme0n1", 00:37:14.126 "aliases": [ 00:37:14.126 "75dfa81d-2236-4cba-82fa-f0ad7abd925d" 00:37:14.126 ], 00:37:14.126 "product_name": "NVMe disk", 00:37:14.126 "block_size": 4096, 00:37:14.126 "num_blocks": 1310720, 00:37:14.126 "uuid": "75dfa81d-2236-4cba-82fa-f0ad7abd925d", 00:37:14.126 "assigned_rate_limits": { 00:37:14.126 "rw_ios_per_sec": 0, 00:37:14.126 "rw_mbytes_per_sec": 0, 00:37:14.126 "r_mbytes_per_sec": 0, 00:37:14.126 "w_mbytes_per_sec": 0 00:37:14.126 }, 00:37:14.126 "claimed": true, 00:37:14.126 "claim_type": "read_many_write_one", 00:37:14.126 "zoned": false, 00:37:14.126 "supported_io_types": { 00:37:14.126 "read": true, 00:37:14.126 "write": true, 00:37:14.126 "unmap": true, 00:37:14.126 "flush": true, 00:37:14.127 "reset": true, 00:37:14.127 "nvme_admin": true, 00:37:14.127 "nvme_io": true, 00:37:14.127 "nvme_io_md": false, 00:37:14.127 "write_zeroes": true, 00:37:14.127 "zcopy": false, 00:37:14.127 "get_zone_info": false, 00:37:14.127 "zone_management": false, 00:37:14.127 "zone_append": false, 00:37:14.127 "compare": true, 00:37:14.127 "compare_and_write": false, 00:37:14.127 "abort": true, 00:37:14.127 "seek_hole": false, 00:37:14.127 "seek_data": false, 00:37:14.127 "copy": true, 00:37:14.127 "nvme_iov_md": false 00:37:14.127 }, 00:37:14.127 "driver_specific": { 00:37:14.127 "nvme": [ 00:37:14.127 { 00:37:14.127 "pci_address": "0000:00:11.0", 00:37:14.127 "trid": { 00:37:14.127 "trtype": "PCIe", 00:37:14.127 "traddr": "0000:00:11.0" 00:37:14.127 }, 00:37:14.127 "ctrlr_data": { 00:37:14.127 "cntlid": 0, 00:37:14.127 "vendor_id": "0x1b36", 00:37:14.127 "model_number": "QEMU NVMe Ctrl", 00:37:14.127 "serial_number": "12341", 00:37:14.127 "firmware_revision": "8.0.0", 00:37:14.127 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:14.127 "oacs": { 00:37:14.127 "security": 0, 00:37:14.127 "format": 1, 00:37:14.127 "firmware": 0, 00:37:14.127 "ns_manage": 1 00:37:14.127 }, 00:37:14.127 "multi_ctrlr": false, 00:37:14.127 "ana_reporting": false 00:37:14.127 }, 00:37:14.127 "vs": { 00:37:14.127 "nvme_version": "1.4" 00:37:14.127 }, 00:37:14.127 "ns_data": { 00:37:14.127 "id": 1, 00:37:14.127 "can_share": false 00:37:14.127 } 00:37:14.127 } 00:37:14.127 ], 00:37:14.127 "mp_policy": "active_passive" 00:37:14.127 } 00:37:14.127 } 00:37:14.127 ]' 00:37:14.127 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:37:14.385 07:45:52 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:14.385 07:45:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:14.642 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=02f64569-b618-4658-9a92-f5bd1c416ce5 00:37:14.642 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:37:14.642 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02f64569-b618-4658-9a92-f5bd1c416ce5 00:37:14.899 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:15.155 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=1c186d78-cc6a-400b-9142-c1ebdc0a76a0 00:37:15.155 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1c186d78-cc6a-400b-9142-c1ebdc0a76a0 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:37:15.413 07:45:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:15.671 { 00:37:15.671 "name": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:15.671 "aliases": [ 00:37:15.671 "lvs/nvme0n1p0" 00:37:15.671 ], 00:37:15.671 "product_name": "Logical Volume", 00:37:15.671 "block_size": 4096, 00:37:15.671 "num_blocks": 26476544, 00:37:15.671 "uuid": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:15.671 "assigned_rate_limits": { 00:37:15.671 "rw_ios_per_sec": 0, 00:37:15.671 "rw_mbytes_per_sec": 0, 00:37:15.671 "r_mbytes_per_sec": 0, 00:37:15.671 "w_mbytes_per_sec": 0 00:37:15.671 }, 00:37:15.671 "claimed": false, 00:37:15.671 "zoned": false, 00:37:15.671 "supported_io_types": { 00:37:15.671 "read": true, 00:37:15.671 "write": true, 00:37:15.671 "unmap": true, 00:37:15.671 "flush": false, 00:37:15.671 "reset": true, 00:37:15.671 "nvme_admin": false, 00:37:15.671 "nvme_io": false, 00:37:15.671 "nvme_io_md": false, 00:37:15.671 "write_zeroes": true, 00:37:15.671 "zcopy": false, 00:37:15.671 "get_zone_info": false, 00:37:15.671 "zone_management": false, 00:37:15.671 "zone_append": false, 00:37:15.671 "compare": false, 00:37:15.671 "compare_and_write": false, 00:37:15.671 "abort": false, 00:37:15.671 "seek_hole": true, 
00:37:15.671 "seek_data": true, 00:37:15.671 "copy": false, 00:37:15.671 "nvme_iov_md": false 00:37:15.671 }, 00:37:15.671 "driver_specific": { 00:37:15.671 "lvol": { 00:37:15.671 "lvol_store_uuid": "1c186d78-cc6a-400b-9142-c1ebdc0a76a0", 00:37:15.671 "base_bdev": "nvme0n1", 00:37:15.671 "thin_provision": true, 00:37:15.671 "num_allocated_clusters": 0, 00:37:15.671 "snapshot": false, 00:37:15.671 "clone": false, 00:37:15.671 "esnap_clone": false 00:37:15.671 } 00:37:15.671 } 00:37:15.671 } 00:37:15.671 ]' 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:37:15.671 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:37:15.929 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:37:16.188 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:16.446 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:16.446 { 00:37:16.446 "name": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:16.446 "aliases": [ 00:37:16.446 "lvs/nvme0n1p0" 00:37:16.446 ], 00:37:16.446 "product_name": "Logical Volume", 00:37:16.446 "block_size": 4096, 00:37:16.446 "num_blocks": 26476544, 00:37:16.446 "uuid": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:16.446 "assigned_rate_limits": { 00:37:16.446 "rw_ios_per_sec": 0, 00:37:16.446 "rw_mbytes_per_sec": 0, 00:37:16.446 "r_mbytes_per_sec": 0, 00:37:16.446 "w_mbytes_per_sec": 0 00:37:16.446 }, 00:37:16.447 "claimed": false, 00:37:16.447 "zoned": false, 00:37:16.447 "supported_io_types": { 00:37:16.447 "read": true, 00:37:16.447 "write": true, 00:37:16.447 "unmap": true, 00:37:16.447 "flush": false, 00:37:16.447 "reset": true, 00:37:16.447 "nvme_admin": false, 00:37:16.447 "nvme_io": false, 00:37:16.447 "nvme_io_md": false, 00:37:16.447 "write_zeroes": true, 00:37:16.447 "zcopy": false, 00:37:16.447 "get_zone_info": false, 00:37:16.447 "zone_management": false, 00:37:16.447 "zone_append": false, 00:37:16.447 "compare": false, 00:37:16.447 "compare_and_write": false, 00:37:16.447 "abort": false, 00:37:16.447 "seek_hole": true, 00:37:16.447 "seek_data": true, 00:37:16.447 
"copy": false, 00:37:16.447 "nvme_iov_md": false 00:37:16.447 }, 00:37:16.447 "driver_specific": { 00:37:16.447 "lvol": { 00:37:16.447 "lvol_store_uuid": "1c186d78-cc6a-400b-9142-c1ebdc0a76a0", 00:37:16.447 "base_bdev": "nvme0n1", 00:37:16.447 "thin_provision": true, 00:37:16.447 "num_allocated_clusters": 0, 00:37:16.447 "snapshot": false, 00:37:16.447 "clone": false, 00:37:16.447 "esnap_clone": false 00:37:16.447 } 00:37:16.447 } 00:37:16.447 } 00:37:16.447 ]' 00:37:16.447 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:16.447 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:37:16.447 07:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:16.447 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:16.447 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:16.447 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:37:16.447 07:45:55 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:37:16.447 07:45:55 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1ff2b98-4edc-4354-9a8d-22dec3530a53 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:17.015 { 00:37:17.015 "name": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:17.015 "aliases": [ 00:37:17.015 "lvs/nvme0n1p0" 00:37:17.015 ], 00:37:17.015 "product_name": "Logical Volume", 00:37:17.015 "block_size": 4096, 00:37:17.015 "num_blocks": 26476544, 00:37:17.015 "uuid": "c1ff2b98-4edc-4354-9a8d-22dec3530a53", 00:37:17.015 "assigned_rate_limits": { 00:37:17.015 "rw_ios_per_sec": 0, 00:37:17.015 "rw_mbytes_per_sec": 0, 00:37:17.015 "r_mbytes_per_sec": 0, 00:37:17.015 "w_mbytes_per_sec": 0 00:37:17.015 }, 00:37:17.015 "claimed": false, 00:37:17.015 "zoned": false, 00:37:17.015 "supported_io_types": { 00:37:17.015 "read": true, 00:37:17.015 "write": true, 00:37:17.015 "unmap": true, 00:37:17.015 "flush": false, 00:37:17.015 "reset": true, 00:37:17.015 "nvme_admin": false, 00:37:17.015 "nvme_io": false, 00:37:17.015 "nvme_io_md": false, 00:37:17.015 "write_zeroes": true, 00:37:17.015 "zcopy": false, 00:37:17.015 "get_zone_info": false, 00:37:17.015 "zone_management": false, 00:37:17.015 "zone_append": false, 00:37:17.015 "compare": false, 00:37:17.015 "compare_and_write": false, 00:37:17.015 "abort": false, 00:37:17.015 "seek_hole": true, 00:37:17.015 "seek_data": true, 00:37:17.015 "copy": false, 00:37:17.015 "nvme_iov_md": false 00:37:17.015 }, 00:37:17.015 "driver_specific": { 00:37:17.015 "lvol": { 00:37:17.015 "lvol_store_uuid": "1c186d78-cc6a-400b-9142-c1ebdc0a76a0", 00:37:17.015 "base_bdev": 
"nvme0n1", 00:37:17.015 "thin_provision": true, 00:37:17.015 "num_allocated_clusters": 0, 00:37:17.015 "snapshot": false, 00:37:17.015 "clone": false, 00:37:17.015 "esnap_clone": false 00:37:17.015 } 00:37:17.015 } 00:37:17.015 } 00:37:17.015 ]' 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:37:17.015 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:17.275 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:17.275 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:17.275 07:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:37:17.275 07:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:37:17.275 07:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c1ff2b98-4edc-4354-9a8d-22dec3530a53 -c nvc0n1p0 --l2p_dram_limit 20 00:37:17.534 [2024-07-15 07:45:55.888540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.888620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:17.534 [2024-07-15 07:45:55.888649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:17.534 [2024-07-15 07:45:55.888664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.888758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.888777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:17.534 [2024-07-15 07:45:55.888795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:37:17.534 [2024-07-15 07:45:55.888811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.888852] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:17.534 [2024-07-15 07:45:55.889932] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:17.534 [2024-07-15 07:45:55.889977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.890003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:17.534 [2024-07-15 07:45:55.890024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:37:17.534 [2024-07-15 07:45:55.890037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.890241] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a7e95a3e-c0b4-4d03-a37c-3f49ed06fb69 00:37:17.534 [2024-07-15 07:45:55.892755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.892806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:17.534 [2024-07-15 07:45:55.892824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:37:17.534 [2024-07-15 07:45:55.892845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.910034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.910158] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:17.534 [2024-07-15 07:45:55.910199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.088 ms 00:37:17.534 [2024-07-15 07:45:55.910230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.910539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.910594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:17.534 [2024-07-15 07:45:55.910653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:37:17.534 [2024-07-15 07:45:55.910694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.910861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.910894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:17.534 [2024-07-15 07:45:55.910910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:37:17.534 [2024-07-15 07:45:55.910928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.910994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:17.534 [2024-07-15 07:45:55.917713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.917795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:17.534 [2024-07-15 07:45:55.917823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.733 ms 00:37:17.534 [2024-07-15 07:45:55.917837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.917923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.534 [2024-07-15 07:45:55.917945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:17.534 [2024-07-15 07:45:55.917962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:37:17.534 [2024-07-15 07:45:55.917974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.534 [2024-07-15 07:45:55.918042] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:17.534 [2024-07-15 07:45:55.918223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:17.534 [2024-07-15 07:45:55.918266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:17.534 [2024-07-15 07:45:55.918286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:37:17.534 [2024-07-15 07:45:55.918305] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:17.534 [2024-07-15 07:45:55.918320] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:17.534 [2024-07-15 07:45:55.918337] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:17.534 [2024-07-15 07:45:55.918349] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:17.534 [2024-07-15 07:45:55.918366] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:17.534 [2024-07-15 07:45:55.918378] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:37:17.535 [2024-07-15 07:45:55.918394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.535 [2024-07-15 07:45:55.918407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:17.535 [2024-07-15 07:45:55.918422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:37:17.535 [2024-07-15 07:45:55.918437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.535 [2024-07-15 07:45:55.918571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.535 [2024-07-15 07:45:55.918601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:17.535 [2024-07-15 07:45:55.918620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:37:17.535 [2024-07-15 07:45:55.918633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.535 [2024-07-15 07:45:55.918745] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:17.535 [2024-07-15 07:45:55.918771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:17.535 [2024-07-15 07:45:55.918790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.535 [2024-07-15 07:45:55.918803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.918834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:17.535 [2024-07-15 07:45:55.918847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.918864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:17.535 [2024-07-15 07:45:55.918877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:17.535 [2024-07-15 07:45:55.918893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:17.535 [2024-07-15 07:45:55.918905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.535 [2024-07-15 07:45:55.918919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:17.535 [2024-07-15 07:45:55.918930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:17.535 [2024-07-15 07:45:55.918957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.535 [2024-07-15 07:45:55.918970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:17.535 [2024-07-15 07:45:55.918986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:17.535 [2024-07-15 07:45:55.918997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:17.535 [2024-07-15 07:45:55.919025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:17.535 [2024-07-15 07:45:55.919080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:17.535 [2024-07-15 07:45:55.919117] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:17.535 [2024-07-15 07:45:55.919156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:17.535 [2024-07-15 07:45:55.919193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:17.535 [2024-07-15 07:45:55.919235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.535 [2024-07-15 07:45:55.919270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:17.535 [2024-07-15 07:45:55.919281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:17.535 [2024-07-15 07:45:55.919295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.535 [2024-07-15 07:45:55.919306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:17.535 [2024-07-15 07:45:55.919321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:17.535 [2024-07-15 07:45:55.919332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:17.535 [2024-07-15 07:45:55.919358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:17.535 [2024-07-15 07:45:55.919371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919382] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:17.535 [2024-07-15 07:45:55.919397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:17.535 [2024-07-15 07:45:55.919409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.535 [2024-07-15 07:45:55.919436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:17.535 [2024-07-15 07:45:55.919469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:17.535 [2024-07-15 07:45:55.919485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:17.535 [2024-07-15 07:45:55.919500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:17.535 [2024-07-15 07:45:55.919511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:17.535 [2024-07-15 07:45:55.919525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:17.535 [2024-07-15 07:45:55.919543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:17.535 [2024-07-15 07:45:55.919561] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:17.535 [2024-07-15 07:45:55.919599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:17.535 [2024-07-15 07:45:55.919613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:17.535 [2024-07-15 07:45:55.919631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:17.535 [2024-07-15 07:45:55.919645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:17.535 [2024-07-15 07:45:55.919663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:17.535 [2024-07-15 07:45:55.919676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:17.535 [2024-07-15 07:45:55.919694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:17.535 [2024-07-15 07:45:55.919708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:17.535 [2024-07-15 07:45:55.919735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:17.535 [2024-07-15 07:45:55.919812] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:17.535 [2024-07-15 07:45:55.919829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:17.535 [2024-07-15 07:45:55.919858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:17.535 [2024-07-15 07:45:55.919871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:17.535 [2024-07-15 07:45:55.919886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:17.535 [2024-07-15 07:45:55.919899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.535 [2024-07-15 07:45:55.919914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:17.535 [2024-07-15 07:45:55.919931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:37:17.535 [2024-07-15 07:45:55.919945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.535 [2024-07-15 07:45:55.920009] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:37:17.535 [2024-07-15 07:45:55.920036] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:37:21.717 [2024-07-15 07:45:59.944266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:45:59.944367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:37:21.717 [2024-07-15 07:45:59.944393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4024.281 ms 00:37:21.717 [2024-07-15 07:45:59.944415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:45:59.996604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:45:59.996701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:21.717 [2024-07-15 07:45:59.996732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.845 ms 00:37:21.717 [2024-07-15 07:45:59.996749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:45:59.996982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:45:59.997011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:21.717 [2024-07-15 07:45:59.997026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:37:21.717 [2024-07-15 07:45:59.997045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.045809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.045896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:21.717 [2024-07-15 07:46:00.045920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.704 ms 00:37:21.717 [2024-07-15 07:46:00.045937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.046009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.046037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:21.717 [2024-07-15 07:46:00.046051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:21.717 [2024-07-15 07:46:00.046066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.046909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.046966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:21.717 [2024-07-15 07:46:00.046984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:37:21.717 [2024-07-15 07:46:00.047000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.047183] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.047215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:21.717 [2024-07-15 07:46:00.047241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:37:21.717 [2024-07-15 07:46:00.047259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.067724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.067819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:21.717 [2024-07-15 07:46:00.067845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.431 ms 00:37:21.717 [2024-07-15 07:46:00.067862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.084539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:37:21.717 [2024-07-15 07:46:00.094523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.094595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:21.717 [2024-07-15 07:46:00.094623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.481 ms 00:37:21.717 [2024-07-15 07:46:00.094636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.165834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.165960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:37:21.717 [2024-07-15 07:46:00.165991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.118 ms 00:37:21.717 [2024-07-15 07:46:00.166006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.166324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.166349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:21.717 [2024-07-15 07:46:00.166371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:37:21.717 [2024-07-15 07:46:00.166384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.199084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.199192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:37:21.717 [2024-07-15 07:46:00.199221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.573 ms 00:37:21.717 [2024-07-15 07:46:00.199235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.229764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.229847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:37:21.717 [2024-07-15 07:46:00.229875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.450 ms 00:37:21.717 [2024-07-15 07:46:00.229888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.717 [2024-07-15 07:46:00.230905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.717 [2024-07-15 07:46:00.230944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:21.717 [2024-07-15 07:46:00.230979] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:37:21.717 [2024-07-15 07:46:00.230992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.973 [2024-07-15 07:46:00.330265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.973 [2024-07-15 07:46:00.330370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:37:21.973 [2024-07-15 07:46:00.330404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.168 ms 00:37:21.973 [2024-07-15 07:46:00.330419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.973 [2024-07-15 07:46:00.365700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.973 [2024-07-15 07:46:00.365800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:37:21.973 [2024-07-15 07:46:00.365829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.179 ms 00:37:21.973 [2024-07-15 07:46:00.365843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.973 [2024-07-15 07:46:00.400644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.973 [2024-07-15 07:46:00.400725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:37:21.973 [2024-07-15 07:46:00.400753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.704 ms 00:37:21.973 [2024-07-15 07:46:00.400768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.973 [2024-07-15 07:46:00.433213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.973 [2024-07-15 07:46:00.433288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:21.973 [2024-07-15 07:46:00.433314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.345 ms 00:37:21.973 [2024-07-15 07:46:00.433328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.973 [2024-07-15 07:46:00.433416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.973 [2024-07-15 07:46:00.433436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:21.974 [2024-07-15 07:46:00.433481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:37:21.974 [2024-07-15 07:46:00.433498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.974 [2024-07-15 07:46:00.433654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.974 [2024-07-15 07:46:00.433675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:21.974 [2024-07-15 07:46:00.433699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:37:21.974 [2024-07-15 07:46:00.433712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.974 [2024-07-15 07:46:00.435303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4546.188 ms, result 0 00:37:21.974 { 00:37:21.974 "name": "ftl0", 00:37:21.974 "uuid": "a7e95a3e-c0b4-4d03-a37c-3f49ed06fb69" 00:37:21.974 } 00:37:21.974 07:46:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:37:21.974 07:46:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:37:21.974 07:46:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:37:22.230 07:46:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:37:22.487 [2024-07-15 07:46:00.851546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:37:22.487 I/O size of 69632 is greater than zero copy threshold (65536). 00:37:22.487 Zero copy mechanism will not be used. 00:37:22.487 Running I/O for 4 seconds... 00:37:26.668 00:37:26.668 Latency(us) 00:37:26.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.668 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:37:26.668 ftl0 : 4.00 1887.53 125.34 0.00 0.00 553.73 242.04 4915.20 00:37:26.668 =================================================================================================================== 00:37:26.668 Total : 1887.53 125.34 0.00 0.00 553.73 242.04 4915.20 00:37:26.668 [2024-07-15 07:46:04.863771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:37:26.668 0 00:37:26.668 07:46:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:37:26.668 [2024-07-15 07:46:05.004933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:37:26.668 Running I/O for 4 seconds... 00:37:30.866 00:37:30.866 Latency(us) 00:37:30.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.866 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.866 ftl0 : 4.02 7623.30 29.78 0.00 0.00 16743.10 351.88 34317.03 00:37:30.866 =================================================================================================================== 00:37:30.866 Total : 7623.30 29.78 0.00 0.00 16743.10 0.00 34317.03 00:37:30.866 [2024-07-15 07:46:09.037516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:37:30.866 0 00:37:30.866 07:46:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:37:30.866 [2024-07-15 07:46:09.195065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:37:30.866 Running I/O for 4 seconds... 
00:37:35.051 00:37:35.051 Latency(us) 00:37:35.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.051 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:35.051 Verification LBA range: start 0x0 length 0x1400000 00:37:35.051 ftl0 : 4.01 5955.92 23.27 0.00 0.00 21414.51 377.95 53143.74 00:37:35.051 =================================================================================================================== 00:37:35.051 Total : 5955.92 23.27 0.00 0.00 21414.51 0.00 53143.74 00:37:35.051 [2024-07-15 07:46:13.229561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:37:35.051 0 00:37:35.051 07:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:37:35.051 [2024-07-15 07:46:13.464698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.051 [2024-07-15 07:46:13.464796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:35.051 [2024-07-15 07:46:13.464842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:35.051 [2024-07-15 07:46:13.464856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.051 [2024-07-15 07:46:13.464903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:35.051 [2024-07-15 07:46:13.468886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.051 [2024-07-15 07:46:13.468941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:35.051 [2024-07-15 07:46:13.468957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:37:35.051 [2024-07-15 07:46:13.468975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.051 [2024-07-15 07:46:13.470861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.051 [2024-07-15 07:46:13.470968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:35.051 [2024-07-15 07:46:13.470988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:37:35.051 [2024-07-15 07:46:13.471004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.051 [2024-07-15 07:46:13.654460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.051 [2024-07-15 07:46:13.654595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:35.051 [2024-07-15 07:46:13.654627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 183.424 ms 00:37:35.051 [2024-07-15 07:46:13.654651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.052 [2024-07-15 07:46:13.661227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.052 [2024-07-15 07:46:13.661277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:35.052 [2024-07-15 07:46:13.661295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.526 ms 00:37:35.052 [2024-07-15 07:46:13.661311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.694259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.694341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:35.312 [2024-07-15 07:46:13.694364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.838 ms 00:37:35.312 [2024-07-15 07:46:13.694381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.715717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.715815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:35.312 [2024-07-15 07:46:13.715840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.237 ms 00:37:35.312 [2024-07-15 07:46:13.715862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.716127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.716166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:35.312 [2024-07-15 07:46:13.716183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:37:35.312 [2024-07-15 07:46:13.716204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.748456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.748560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:37:35.312 [2024-07-15 07:46:13.748582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.226 ms 00:37:35.312 [2024-07-15 07:46:13.748598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.779816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.779904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:37:35.312 [2024-07-15 07:46:13.779923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.166 ms 00:37:35.312 [2024-07-15 07:46:13.779939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.810386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.810482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:35.312 [2024-07-15 07:46:13.810502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.398 ms 00:37:35.312 [2024-07-15 07:46:13.810518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.840869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.312 [2024-07-15 07:46:13.840932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:35.312 [2024-07-15 07:46:13.840962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.235 ms 00:37:35.312 [2024-07-15 07:46:13.840981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.312 [2024-07-15 07:46:13.841036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:35.312 [2024-07-15 07:46:13.841068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:37:35.312 [2024-07-15 07:46:13.841129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:35.312 [2024-07-15 07:46:13.841945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.841960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.841974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.841989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842263] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:35.313 [2024-07-15 07:46:13.842607] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:35.313 [2024-07-15 07:46:13.842620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e95a3e-c0b4-4d03-a37c-3f49ed06fb69 00:37:35.313 [2024-07-15 07:46:13.842645] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:37:35.313 [2024-07-15 07:46:13.842659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:37:35.313 [2024-07-15 07:46:13.842673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:35.313 [2024-07-15 07:46:13.842686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:35.313 [2024-07-15 07:46:13.842705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:35.313 [2024-07-15 07:46:13.842717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:35.313 [2024-07-15 07:46:13.842732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:35.313 [2024-07-15 07:46:13.842743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:35.313 [2024-07-15 07:46:13.842759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:35.313 [2024-07-15 07:46:13.842772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.313 [2024-07-15 07:46:13.842787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:35.313 [2024-07-15 07:46:13.842800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:37:35.313 [2024-07-15 07:46:13.842815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.861041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.313 [2024-07-15 07:46:13.861156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:35.313 [2024-07-15 07:46:13.861182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.155 ms 00:37:35.313 [2024-07-15 07:46:13.861199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.861779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.313 [2024-07-15 07:46:13.861814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:35.313 [2024-07-15 07:46:13.861831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:37:35.313 [2024-07-15 07:46:13.861846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.906156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.313 [2024-07-15 07:46:13.906252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:35.313 [2024-07-15 07:46:13.906288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.313 [2024-07-15 07:46:13.906307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.906407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.313 [2024-07-15 07:46:13.906427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:35.313 [2024-07-15 07:46:13.906441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.313 [2024-07-15 07:46:13.906456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.906608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.313 [2024-07-15 07:46:13.906635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:35.313 [2024-07-15 07:46:13.906650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.313 [2024-07-15 07:46:13.906669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.313 [2024-07-15 07:46:13.906695] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.313 [2024-07-15 07:46:13.906713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:35.313 [2024-07-15 07:46:13.906725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.313 [2024-07-15 07:46:13.906740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.022548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.022625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:35.572 [2024-07-15 07:46:14.022649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.022668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.113588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.113685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:35.572 [2024-07-15 07:46:14.113709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.113725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.113865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.113891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:35.572 [2024-07-15 07:46:14.113905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.113932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.114026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:35.572 [2024-07-15 07:46:14.114039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.114055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.114229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:35.572 [2024-07-15 07:46:14.114245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.114264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.114361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:35.572 [2024-07-15 07:46:14.114377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.114392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.114484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:35.572 [2024-07-15 07:46:14.114497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.114518] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:35.572 [2024-07-15 07:46:14.114609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:35.572 [2024-07-15 07:46:14.114623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:35.572 [2024-07-15 07:46:14.114639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.572 [2024-07-15 07:46:14.114821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 650.077 ms, result 0 00:37:35.572 true 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80644 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80644 ']' 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80644 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80644 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:35.572 killing process with pid 80644 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80644' 00:37:35.572 Received shutdown signal, test time was about 4.000000 seconds 00:37:35.572 00:37:35.572 Latency(us) 00:37:35.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.572 =================================================================================================================== 00:37:35.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80644 00:37:35.572 07:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80644 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:37:36.949 Remove shared memory files 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:37:36.949 ************************************ 00:37:36.949 END TEST ftl_bdevperf 00:37:36.949 ************************************ 00:37:36.949 00:37:36.949 real 0m24.673s 00:37:36.949 user 0m28.184s 00:37:36.949 sys 0m1.412s 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:36.949 07:46:15 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.949 07:46:15 ftl -- common/autotest_common.sh@1142 -- # return 0 00:37:36.949 07:46:15 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:37:36.949 07:46:15 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:36.949 07:46:15 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:36.949 07:46:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:37.208 ************************************ 00:37:37.208 START TEST ftl_trim 00:37:37.208 ************************************ 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:37:37.208 * Looking for test storage... 00:37:37.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:37.208 
07:46:15 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=81012 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:37:37.208 07:46:15 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 81012 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81012 ']' 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:37.208 07:46:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:37:37.466 [2024-07-15 07:46:15.837785] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:37:37.466 [2024-07-15 07:46:15.837997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81012 ] 00:37:37.466 [2024-07-15 07:46:16.012612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:38.033 [2024-07-15 07:46:16.371200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.033 [2024-07-15 07:46:16.371358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.033 [2024-07-15 07:46:16.371359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.968 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:38.968 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:37:38.968 07:46:17 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:39.225 07:46:17 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:39.226 07:46:17 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:37:39.226 07:46:17 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:39.226 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:37:39.226 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:39.226 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:37:39.226 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:37:39.226 07:46:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:39.483 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:39.483 { 00:37:39.483 "name": "nvme0n1", 00:37:39.483 "aliases": [ 00:37:39.483 "e1857b7e-6a5f-4e53-bc3c-8e446c43ab48" 00:37:39.483 ], 00:37:39.483 "product_name": "NVMe disk", 00:37:39.483 "block_size": 4096, 00:37:39.483 "num_blocks": 1310720, 00:37:39.483 "uuid": "e1857b7e-6a5f-4e53-bc3c-8e446c43ab48", 00:37:39.483 "assigned_rate_limits": { 00:37:39.483 "rw_ios_per_sec": 0, 00:37:39.483 "rw_mbytes_per_sec": 0, 00:37:39.483 "r_mbytes_per_sec": 0, 00:37:39.483 "w_mbytes_per_sec": 0 00:37:39.483 }, 00:37:39.483 "claimed": true, 00:37:39.483 "claim_type": "read_many_write_one", 00:37:39.483 "zoned": false, 00:37:39.483 "supported_io_types": { 00:37:39.483 "read": true, 00:37:39.483 "write": true, 00:37:39.483 "unmap": true, 00:37:39.483 "flush": true, 00:37:39.483 "reset": true, 00:37:39.483 "nvme_admin": true, 00:37:39.483 "nvme_io": true, 00:37:39.483 "nvme_io_md": false, 00:37:39.483 "write_zeroes": true, 00:37:39.483 "zcopy": false, 00:37:39.483 "get_zone_info": false, 00:37:39.483 "zone_management": false, 00:37:39.483 "zone_append": false, 00:37:39.483 "compare": true, 00:37:39.483 "compare_and_write": false, 00:37:39.483 "abort": true, 00:37:39.483 "seek_hole": false, 00:37:39.483 "seek_data": false, 00:37:39.483 
"copy": true, 00:37:39.483 "nvme_iov_md": false 00:37:39.483 }, 00:37:39.483 "driver_specific": { 00:37:39.483 "nvme": [ 00:37:39.483 { 00:37:39.483 "pci_address": "0000:00:11.0", 00:37:39.483 "trid": { 00:37:39.483 "trtype": "PCIe", 00:37:39.483 "traddr": "0000:00:11.0" 00:37:39.483 }, 00:37:39.483 "ctrlr_data": { 00:37:39.483 "cntlid": 0, 00:37:39.483 "vendor_id": "0x1b36", 00:37:39.483 "model_number": "QEMU NVMe Ctrl", 00:37:39.483 "serial_number": "12341", 00:37:39.483 "firmware_revision": "8.0.0", 00:37:39.483 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:39.483 "oacs": { 00:37:39.483 "security": 0, 00:37:39.483 "format": 1, 00:37:39.483 "firmware": 0, 00:37:39.483 "ns_manage": 1 00:37:39.483 }, 00:37:39.483 "multi_ctrlr": false, 00:37:39.483 "ana_reporting": false 00:37:39.483 }, 00:37:39.483 "vs": { 00:37:39.483 "nvme_version": "1.4" 00:37:39.483 }, 00:37:39.483 "ns_data": { 00:37:39.483 "id": 1, 00:37:39.483 "can_share": false 00:37:39.483 } 00:37:39.483 } 00:37:39.483 ], 00:37:39.483 "mp_policy": "active_passive" 00:37:39.483 } 00:37:39.483 } 00:37:39.483 ]' 00:37:39.483 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:39.483 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:37:39.483 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:39.741 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:37:39.741 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:37:39.741 07:46:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:37:39.741 07:46:18 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:37:39.741 07:46:18 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:39.741 07:46:18 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:37:39.741 07:46:18 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:39.741 07:46:18 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:39.999 07:46:18 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=1c186d78-cc6a-400b-9142-c1ebdc0a76a0 00:37:39.999 07:46:18 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:37:39.999 07:46:18 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c186d78-cc6a-400b-9142-c1ebdc0a76a0 00:37:40.257 07:46:18 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:40.515 07:46:18 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=0d71380a-bf2a-481a-a225-94463ceff5fb 00:37:40.515 07:46:18 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0d71380a-bf2a-481a-a225-94463ceff5fb 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:37:40.773 07:46:19 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:40.773 07:46:19 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:40.773 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:40.773 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:37:40.773 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:37:40.773 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:41.031 { 00:37:41.031 "name": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:41.031 "aliases": [ 00:37:41.031 "lvs/nvme0n1p0" 00:37:41.031 ], 00:37:41.031 "product_name": "Logical Volume", 00:37:41.031 "block_size": 4096, 00:37:41.031 "num_blocks": 26476544, 00:37:41.031 "uuid": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:41.031 "assigned_rate_limits": { 00:37:41.031 "rw_ios_per_sec": 0, 00:37:41.031 "rw_mbytes_per_sec": 0, 00:37:41.031 "r_mbytes_per_sec": 0, 00:37:41.031 "w_mbytes_per_sec": 0 00:37:41.031 }, 00:37:41.031 "claimed": false, 00:37:41.031 "zoned": false, 00:37:41.031 "supported_io_types": { 00:37:41.031 "read": true, 00:37:41.031 "write": true, 00:37:41.031 "unmap": true, 00:37:41.031 "flush": false, 00:37:41.031 "reset": true, 00:37:41.031 "nvme_admin": false, 00:37:41.031 "nvme_io": false, 00:37:41.031 "nvme_io_md": false, 00:37:41.031 "write_zeroes": true, 00:37:41.031 "zcopy": false, 00:37:41.031 "get_zone_info": false, 00:37:41.031 "zone_management": false, 00:37:41.031 "zone_append": false, 00:37:41.031 "compare": false, 00:37:41.031 "compare_and_write": false, 00:37:41.031 "abort": false, 00:37:41.031 "seek_hole": true, 00:37:41.031 "seek_data": true, 00:37:41.031 "copy": false, 00:37:41.031 "nvme_iov_md": false 00:37:41.031 }, 00:37:41.031 "driver_specific": { 00:37:41.031 "lvol": { 00:37:41.031 "lvol_store_uuid": "0d71380a-bf2a-481a-a225-94463ceff5fb", 00:37:41.031 "base_bdev": "nvme0n1", 00:37:41.031 "thin_provision": true, 00:37:41.031 "num_allocated_clusters": 0, 00:37:41.031 "snapshot": false, 00:37:41.031 "clone": false, 00:37:41.031 "esnap_clone": false 00:37:41.031 } 00:37:41.031 } 00:37:41.031 } 00:37:41.031 ]' 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:41.031 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:37:41.031 07:46:19 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:37:41.031 07:46:19 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:37:41.031 07:46:19 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:41.290 07:46:19 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:41.290 07:46:19 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:41.290 07:46:19 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:41.290 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:41.290 
07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:41.290 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:37:41.290 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:37:41.290 07:46:19 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:41.548 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:41.548 { 00:37:41.548 "name": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:41.548 "aliases": [ 00:37:41.548 "lvs/nvme0n1p0" 00:37:41.548 ], 00:37:41.548 "product_name": "Logical Volume", 00:37:41.548 "block_size": 4096, 00:37:41.548 "num_blocks": 26476544, 00:37:41.548 "uuid": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:41.548 "assigned_rate_limits": { 00:37:41.548 "rw_ios_per_sec": 0, 00:37:41.548 "rw_mbytes_per_sec": 0, 00:37:41.548 "r_mbytes_per_sec": 0, 00:37:41.548 "w_mbytes_per_sec": 0 00:37:41.548 }, 00:37:41.548 "claimed": false, 00:37:41.548 "zoned": false, 00:37:41.548 "supported_io_types": { 00:37:41.548 "read": true, 00:37:41.548 "write": true, 00:37:41.548 "unmap": true, 00:37:41.548 "flush": false, 00:37:41.548 "reset": true, 00:37:41.548 "nvme_admin": false, 00:37:41.548 "nvme_io": false, 00:37:41.548 "nvme_io_md": false, 00:37:41.548 "write_zeroes": true, 00:37:41.548 "zcopy": false, 00:37:41.548 "get_zone_info": false, 00:37:41.548 "zone_management": false, 00:37:41.548 "zone_append": false, 00:37:41.548 "compare": false, 00:37:41.548 "compare_and_write": false, 00:37:41.548 "abort": false, 00:37:41.548 "seek_hole": true, 00:37:41.548 "seek_data": true, 00:37:41.548 "copy": false, 00:37:41.548 "nvme_iov_md": false 00:37:41.548 }, 00:37:41.548 "driver_specific": { 00:37:41.548 "lvol": { 00:37:41.548 "lvol_store_uuid": "0d71380a-bf2a-481a-a225-94463ceff5fb", 00:37:41.548 "base_bdev": "nvme0n1", 00:37:41.548 "thin_provision": true, 00:37:41.548 "num_allocated_clusters": 0, 00:37:41.548 "snapshot": false, 00:37:41.548 "clone": false, 00:37:41.548 "esnap_clone": false 00:37:41.548 } 00:37:41.548 } 00:37:41.548 } 00:37:41.548 ]' 00:37:41.548 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:41.806 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:37:41.806 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:41.806 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:41.806 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:41.806 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:37:41.806 07:46:20 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:37:41.806 07:46:20 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:42.064 07:46:20 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:37:42.064 07:46:20 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:37:42.064 07:46:20 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:42.064 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:42.065 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:37:42.065 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:37:42.065 07:46:20 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:37:42.065 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36b33f30-1714-4fa6-9a0b-05330de6dba2 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:37:42.324 { 00:37:42.324 "name": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:42.324 "aliases": [ 00:37:42.324 "lvs/nvme0n1p0" 00:37:42.324 ], 00:37:42.324 "product_name": "Logical Volume", 00:37:42.324 "block_size": 4096, 00:37:42.324 "num_blocks": 26476544, 00:37:42.324 "uuid": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:42.324 "assigned_rate_limits": { 00:37:42.324 "rw_ios_per_sec": 0, 00:37:42.324 "rw_mbytes_per_sec": 0, 00:37:42.324 "r_mbytes_per_sec": 0, 00:37:42.324 "w_mbytes_per_sec": 0 00:37:42.324 }, 00:37:42.324 "claimed": false, 00:37:42.324 "zoned": false, 00:37:42.324 "supported_io_types": { 00:37:42.324 "read": true, 00:37:42.324 "write": true, 00:37:42.324 "unmap": true, 00:37:42.324 "flush": false, 00:37:42.324 "reset": true, 00:37:42.324 "nvme_admin": false, 00:37:42.324 "nvme_io": false, 00:37:42.324 "nvme_io_md": false, 00:37:42.324 "write_zeroes": true, 00:37:42.324 "zcopy": false, 00:37:42.324 "get_zone_info": false, 00:37:42.324 "zone_management": false, 00:37:42.324 "zone_append": false, 00:37:42.324 "compare": false, 00:37:42.324 "compare_and_write": false, 00:37:42.324 "abort": false, 00:37:42.324 "seek_hole": true, 00:37:42.324 "seek_data": true, 00:37:42.324 "copy": false, 00:37:42.324 "nvme_iov_md": false 00:37:42.324 }, 00:37:42.324 "driver_specific": { 00:37:42.324 "lvol": { 00:37:42.324 "lvol_store_uuid": "0d71380a-bf2a-481a-a225-94463ceff5fb", 00:37:42.324 "base_bdev": "nvme0n1", 00:37:42.324 "thin_provision": true, 00:37:42.324 "num_allocated_clusters": 0, 00:37:42.324 "snapshot": false, 00:37:42.324 "clone": false, 00:37:42.324 "esnap_clone": false 00:37:42.324 } 00:37:42.324 } 00:37:42.324 } 00:37:42.324 ]' 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:37:42.324 07:46:20 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:37:42.324 07:46:20 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:37:42.324 07:46:20 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 36b33f30-1714-4fa6-9a0b-05330de6dba2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:37:42.583 [2024-07-15 07:46:21.100154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.100248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:42.583 [2024-07-15 07:46:21.100270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:42.583 [2024-07-15 07:46:21.100288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.104417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.104494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:42.583 [2024-07-15 07:46:21.104513] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:37:42.583 [2024-07-15 07:46:21.104529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.104770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:42.583 [2024-07-15 07:46:21.105874] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:42.583 [2024-07-15 07:46:21.105916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.105941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:42.583 [2024-07-15 07:46:21.105956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:37:42.583 [2024-07-15 07:46:21.105971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.106219] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d442620d-548b-4e89-8b2c-9e30b59e312d 00:37:42.583 [2024-07-15 07:46:21.108969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.109008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:42.583 [2024-07-15 07:46:21.109029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:37:42.583 [2024-07-15 07:46:21.109042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.123918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.123982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:42.583 [2024-07-15 07:46:21.124005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.772 ms 00:37:42.583 [2024-07-15 07:46:21.124019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.124314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.124342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:42.583 [2024-07-15 07:46:21.124360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:37:42.583 [2024-07-15 07:46:21.124373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.124436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.124474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:42.583 [2024-07-15 07:46:21.124514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:37:42.583 [2024-07-15 07:46:21.124527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.124581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:37:42.583 [2024-07-15 07:46:21.130774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.130847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:42.583 [2024-07-15 07:46:21.130864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.210 ms 00:37:42.583 [2024-07-15 07:46:21.130879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 
07:46:21.130970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.130994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:42.583 [2024-07-15 07:46:21.131008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:37:42.583 [2024-07-15 07:46:21.131023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.131063] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:42.583 [2024-07-15 07:46:21.131241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:42.583 [2024-07-15 07:46:21.131265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:42.583 [2024-07-15 07:46:21.131288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:37:42.583 [2024-07-15 07:46:21.131304] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:42.583 [2024-07-15 07:46:21.131321] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:42.583 [2024-07-15 07:46:21.131334] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:37:42.583 [2024-07-15 07:46:21.131348] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:42.583 [2024-07-15 07:46:21.131365] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:42.583 [2024-07-15 07:46:21.131403] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:42.583 [2024-07-15 07:46:21.131417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.131447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:42.583 [2024-07-15 07:46:21.131475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:37:42.583 [2024-07-15 07:46:21.131521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.131658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.583 [2024-07-15 07:46:21.131677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:42.583 [2024-07-15 07:46:21.131691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:37:42.583 [2024-07-15 07:46:21.131705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.583 [2024-07-15 07:46:21.131846] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:42.583 [2024-07-15 07:46:21.131869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:42.583 [2024-07-15 07:46:21.131882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:42.583 [2024-07-15 07:46:21.131898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.583 [2024-07-15 07:46:21.131911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:42.583 [2024-07-15 07:46:21.131924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:42.583 [2024-07-15 07:46:21.131936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:37:42.583 [2024-07-15 07:46:21.131950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:37:42.583 [2024-07-15 07:46:21.131961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:37:42.583 [2024-07-15 07:46:21.131975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:42.583 [2024-07-15 07:46:21.131986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:42.583 [2024-07-15 07:46:21.132000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:37:42.583 [2024-07-15 07:46:21.132011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:42.583 [2024-07-15 07:46:21.132026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:42.583 [2024-07-15 07:46:21.132038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:37:42.583 [2024-07-15 07:46:21.132052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.583 [2024-07-15 07:46:21.132075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:42.583 [2024-07-15 07:46:21.132093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:37:42.583 [2024-07-15 07:46:21.132104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.583 [2024-07-15 07:46:21.132134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:42.583 [2024-07-15 07:46:21.132145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:37:42.583 [2024-07-15 07:46:21.132159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:42.583 [2024-07-15 07:46:21.132169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:42.583 [2024-07-15 07:46:21.132183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:37:42.583 [2024-07-15 07:46:21.132194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:42.583 [2024-07-15 07:46:21.132208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:42.583 [2024-07-15 07:46:21.132218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:37:42.583 [2024-07-15 07:46:21.132233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:42.583 [2024-07-15 07:46:21.132244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:42.584 [2024-07-15 07:46:21.132257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:37:42.584 [2024-07-15 07:46:21.132278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:42.584 [2024-07-15 07:46:21.132291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:42.584 [2024-07-15 07:46:21.132302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:37:42.584 [2024-07-15 07:46:21.132319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:42.584 [2024-07-15 07:46:21.132330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:42.584 [2024-07-15 07:46:21.132344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:37:42.584 [2024-07-15 07:46:21.132355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:42.584 [2024-07-15 07:46:21.132369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:42.584 [2024-07-15 07:46:21.132380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:37:42.584 [2024-07-15 07:46:21.132395] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.584 [2024-07-15 07:46:21.132406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:42.584 [2024-07-15 07:46:21.132420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:37:42.584 [2024-07-15 07:46:21.132431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.584 [2024-07-15 07:46:21.132445] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:42.584 [2024-07-15 07:46:21.132456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:42.584 [2024-07-15 07:46:21.132487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:42.584 [2024-07-15 07:46:21.132511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:42.584 [2024-07-15 07:46:21.132529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:42.584 [2024-07-15 07:46:21.132550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:42.584 [2024-07-15 07:46:21.132568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:42.584 [2024-07-15 07:46:21.132580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:42.584 [2024-07-15 07:46:21.132594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:42.584 [2024-07-15 07:46:21.132605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:42.584 [2024-07-15 07:46:21.132625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:42.584 [2024-07-15 07:46:21.132652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:37:42.584 [2024-07-15 07:46:21.132681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:37:42.584 [2024-07-15 07:46:21.132696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:37:42.584 [2024-07-15 07:46:21.132708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:37:42.584 [2024-07-15 07:46:21.132723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:37:42.584 [2024-07-15 07:46:21.132735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:37:42.584 [2024-07-15 07:46:21.132750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:37:42.584 [2024-07-15 07:46:21.132762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:37:42.584 [2024-07-15 07:46:21.132779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:37:42.584 [2024-07-15 07:46:21.132791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:37:42.584 [2024-07-15 07:46:21.132865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:42.584 [2024-07-15 07:46:21.132878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:42.584 [2024-07-15 07:46:21.132907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:42.584 [2024-07-15 07:46:21.132922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:42.584 [2024-07-15 07:46:21.132934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:42.584 [2024-07-15 07:46:21.132951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:42.584 [2024-07-15 07:46:21.132964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:42.584 [2024-07-15 07:46:21.132981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:37:42.584 [2024-07-15 07:46:21.132993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:42.584 [2024-07-15 07:46:21.133095] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
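The RPC sequence recorded above (clear_lvols, lvstore and thin lvol creation, cache controller attach, split, and bdev_ftl_create) can be replayed by hand against a running SPDK target. The sketch below condenses the exact commands this run issued; the script wrapper, variable names, and error handling are illustrative additions, while the sizes, PCI address, bdev names, and flags are the ones visible in the log. UUIDs are captured from the RPC output rather than hardcoded, since they differ per run.

#!/usr/bin/env bash
# Condensed sketch of the FTL bdev setup recorded in the ftl_trim log above.
# Assumes a running SPDK application with the QEMU NVMe controllers shown in
# this run (base namespace nvme0n1, cache controller at 0000:00:10.0).
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# clear_lvols: drop any lvstores left over from a previous run.
for lvs_uuid in $("$RPC" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    "$RPC" bdev_lvol_delete_lvstore -u "$lvs_uuid"
done

# Base device: lvstore on nvme0n1, then a 103424 MiB thin-provisioned lvol.
lvs=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)
base=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# NV cache: attach the second controller and split off a 5171 MiB partition.
"$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$RPC" bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top of the lvol, with nvc0n1p0 as the write buffer cache.
"$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The matching teardown, issued later in this same log, is a single call to rpc.py bdev_ftl_unload -b ftl0.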
00:37:42.584 [2024-07-15 07:46:21.133119] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:37:45.113 [2024-07-15 07:46:23.563509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.563624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:37:45.113 [2024-07-15 07:46:23.563653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2430.412 ms 00:37:45.113 [2024-07-15 07:46:23.563668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.611167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.611241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:45.113 [2024-07-15 07:46:23.611269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.099 ms 00:37:45.113 [2024-07-15 07:46:23.611283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.611571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.611592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:45.113 [2024-07-15 07:46:23.611611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:37:45.113 [2024-07-15 07:46:23.611628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.672402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.672515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:45.113 [2024-07-15 07:46:23.672545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.711 ms 00:37:45.113 [2024-07-15 07:46:23.672562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.672769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.672794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:45.113 [2024-07-15 07:46:23.672815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:37:45.113 [2024-07-15 07:46:23.672842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.673650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.673679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:45.113 [2024-07-15 07:46:23.673700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:37:45.113 [2024-07-15 07:46:23.673716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.673948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.673967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:45.113 [2024-07-15 07:46:23.673987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:37:45.113 [2024-07-15 07:46:23.674003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.700817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.113 [2024-07-15 07:46:23.700884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:45.113 [2024-07-15 
07:46:23.700910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.755 ms 00:37:45.113 [2024-07-15 07:46:23.700924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.113 [2024-07-15 07:46:23.717352] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:37:45.371 [2024-07-15 07:46:23.746054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.371 [2024-07-15 07:46:23.746162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:45.371 [2024-07-15 07:46:23.746189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.919 ms 00:37:45.371 [2024-07-15 07:46:23.746207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.371 [2024-07-15 07:46:23.824017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.371 [2024-07-15 07:46:23.824125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:37:45.371 [2024-07-15 07:46:23.824148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.625 ms 00:37:45.371 [2024-07-15 07:46:23.824166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.371 [2024-07-15 07:46:23.824553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.371 [2024-07-15 07:46:23.824580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:45.371 [2024-07-15 07:46:23.824595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:37:45.372 [2024-07-15 07:46:23.824615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.372 [2024-07-15 07:46:23.855559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.372 [2024-07-15 07:46:23.855623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:37:45.372 [2024-07-15 07:46:23.855644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.898 ms 00:37:45.372 [2024-07-15 07:46:23.855660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.372 [2024-07-15 07:46:23.886124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.372 [2024-07-15 07:46:23.886200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:37:45.372 [2024-07-15 07:46:23.886223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.349 ms 00:37:45.372 [2024-07-15 07:46:23.886241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.372 [2024-07-15 07:46:23.887293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.372 [2024-07-15 07:46:23.887328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:45.372 [2024-07-15 07:46:23.887345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:37:45.372 [2024-07-15 07:46:23.887365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:23.984737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:23.984822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:37:45.630 [2024-07-15 07:46:23.984846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.322 ms 00:37:45.630 [2024-07-15 07:46:23.984867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 
07:46:24.018834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:24.018922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:37:45.630 [2024-07-15 07:46:24.018951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.850 ms 00:37:45.630 [2024-07-15 07:46:24.018987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:24.051808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:24.051881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:37:45.630 [2024-07-15 07:46:24.051903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.697 ms 00:37:45.630 [2024-07-15 07:46:24.051923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:24.083347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:24.083416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:45.630 [2024-07-15 07:46:24.083438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.318 ms 00:37:45.630 [2024-07-15 07:46:24.083467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:24.083583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:24.083616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:45.630 [2024-07-15 07:46:24.083633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:37:45.630 [2024-07-15 07:46:24.083659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:24.083774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:45.630 [2024-07-15 07:46:24.083799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:45.630 [2024-07-15 07:46:24.083815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:37:45.630 [2024-07-15 07:46:24.083860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:45.630 [2024-07-15 07:46:24.085357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:45.630 [2024-07-15 07:46:24.089761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2984.780 ms, result 0 00:37:45.630 [2024-07-15 07:46:24.090696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:45.630 { 00:37:45.630 "name": "ftl0", 00:37:45.630 "uuid": "d442620d-548b-4e89-8b2c-9e30b59e312d" 00:37:45.630 } 00:37:45.630 07:46:24 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:45.630 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:45.888 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:37:46.147 [ 00:37:46.147 { 00:37:46.147 "name": "ftl0", 00:37:46.147 "aliases": [ 00:37:46.147 "d442620d-548b-4e89-8b2c-9e30b59e312d" 00:37:46.147 ], 00:37:46.147 "product_name": "FTL disk", 00:37:46.147 "block_size": 4096, 00:37:46.147 "num_blocks": 23592960, 00:37:46.147 "uuid": "d442620d-548b-4e89-8b2c-9e30b59e312d", 00:37:46.147 "assigned_rate_limits": { 00:37:46.147 "rw_ios_per_sec": 0, 00:37:46.147 "rw_mbytes_per_sec": 0, 00:37:46.147 "r_mbytes_per_sec": 0, 00:37:46.147 "w_mbytes_per_sec": 0 00:37:46.147 }, 00:37:46.147 "claimed": false, 00:37:46.147 "zoned": false, 00:37:46.147 "supported_io_types": { 00:37:46.147 "read": true, 00:37:46.147 "write": true, 00:37:46.147 "unmap": true, 00:37:46.147 "flush": true, 00:37:46.147 "reset": false, 00:37:46.147 "nvme_admin": false, 00:37:46.147 "nvme_io": false, 00:37:46.147 "nvme_io_md": false, 00:37:46.147 "write_zeroes": true, 00:37:46.147 "zcopy": false, 00:37:46.147 "get_zone_info": false, 00:37:46.147 "zone_management": false, 00:37:46.147 "zone_append": false, 00:37:46.147 "compare": false, 00:37:46.147 "compare_and_write": false, 00:37:46.147 "abort": false, 00:37:46.147 "seek_hole": false, 00:37:46.147 "seek_data": false, 00:37:46.147 "copy": false, 00:37:46.147 "nvme_iov_md": false 00:37:46.147 }, 00:37:46.147 "driver_specific": { 00:37:46.147 "ftl": { 00:37:46.147 "base_bdev": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:46.147 "cache": "nvc0n1p0" 00:37:46.147 } 00:37:46.147 } 00:37:46.147 } 00:37:46.147 ] 00:37:46.147 07:46:24 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:37:46.147 07:46:24 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:37:46.147 07:46:24 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:37:46.406 07:46:24 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:37:46.406 07:46:24 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:37:46.664 07:46:25 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:37:46.664 { 00:37:46.664 "name": "ftl0", 00:37:46.664 "aliases": [ 00:37:46.664 "d442620d-548b-4e89-8b2c-9e30b59e312d" 00:37:46.664 ], 00:37:46.664 "product_name": "FTL disk", 00:37:46.664 "block_size": 4096, 00:37:46.664 "num_blocks": 23592960, 00:37:46.664 "uuid": "d442620d-548b-4e89-8b2c-9e30b59e312d", 00:37:46.664 "assigned_rate_limits": { 00:37:46.664 "rw_ios_per_sec": 0, 00:37:46.664 "rw_mbytes_per_sec": 0, 00:37:46.664 "r_mbytes_per_sec": 0, 00:37:46.664 "w_mbytes_per_sec": 0 00:37:46.664 }, 00:37:46.664 "claimed": false, 00:37:46.664 "zoned": false, 00:37:46.664 "supported_io_types": { 00:37:46.664 "read": true, 00:37:46.664 "write": true, 00:37:46.664 "unmap": true, 00:37:46.664 "flush": true, 00:37:46.664 "reset": false, 00:37:46.664 "nvme_admin": false, 00:37:46.664 "nvme_io": false, 00:37:46.664 "nvme_io_md": false, 00:37:46.664 "write_zeroes": true, 00:37:46.664 "zcopy": false, 00:37:46.664 "get_zone_info": false, 00:37:46.664 "zone_management": false, 00:37:46.664 "zone_append": false, 00:37:46.664 "compare": false, 00:37:46.664 "compare_and_write": false, 00:37:46.664 "abort": false, 00:37:46.664 "seek_hole": false, 00:37:46.664 "seek_data": false, 00:37:46.664 "copy": false, 00:37:46.664 "nvme_iov_md": false 00:37:46.664 }, 00:37:46.664 "driver_specific": { 00:37:46.664 "ftl": { 00:37:46.664 "base_bdev": "36b33f30-1714-4fa6-9a0b-05330de6dba2", 00:37:46.664 "cache": "nvc0n1p0" 
00:37:46.664 } 00:37:46.664 } 00:37:46.664 } 00:37:46.664 ]' 00:37:46.664 07:46:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:37:46.664 07:46:25 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:37:46.664 07:46:25 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:37:46.923 [2024-07-15 07:46:25.434549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.434624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:46.923 [2024-07-15 07:46:25.434656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:46.923 [2024-07-15 07:46:25.434671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.434726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:37:46.923 [2024-07-15 07:46:25.438811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.438851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:46.923 [2024-07-15 07:46:25.438869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:37:46.923 [2024-07-15 07:46:25.438890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.439519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.439550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:46.923 [2024-07-15 07:46:25.439566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:37:46.923 [2024-07-15 07:46:25.439581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.443219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.443255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:46.923 [2024-07-15 07:46:25.443270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:37:46.923 [2024-07-15 07:46:25.443285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.450611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.450650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:46.923 [2024-07-15 07:46:25.450667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.269 ms 00:37:46.923 [2024-07-15 07:46:25.450692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.483001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.483060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:46.923 [2024-07-15 07:46:25.483081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.187 ms 00:37:46.923 [2024-07-15 07:46:25.483102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.502358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.502429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:46.923 [2024-07-15 07:46:25.502471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.152 ms 00:37:46.923 
[2024-07-15 07:46:25.502491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.502788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.502814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:46.923 [2024-07-15 07:46:25.502830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:37:46.923 [2024-07-15 07:46:25.502846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:46.923 [2024-07-15 07:46:25.534877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:46.923 [2024-07-15 07:46:25.534936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:37:46.923 [2024-07-15 07:46:25.534962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.993 ms 00:37:46.923 [2024-07-15 07:46:25.534988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.183 [2024-07-15 07:46:25.566567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.183 [2024-07-15 07:46:25.566628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:37:47.183 [2024-07-15 07:46:25.566647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.481 ms 00:37:47.183 [2024-07-15 07:46:25.566666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.183 [2024-07-15 07:46:25.597691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.183 [2024-07-15 07:46:25.597741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:47.183 [2024-07-15 07:46:25.597759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.926 ms 00:37:47.183 [2024-07-15 07:46:25.597774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.183 [2024-07-15 07:46:25.628101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.183 [2024-07-15 07:46:25.628154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:47.183 [2024-07-15 07:46:25.628173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.171 ms 00:37:47.183 [2024-07-15 07:46:25.628188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.183 [2024-07-15 07:46:25.628292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:47.183 [2024-07-15 07:46:25.628325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628439] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628875] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.628995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:47.183 [2024-07-15 07:46:25.629123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 
07:46:25.629263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:37:47.184 [2024-07-15 07:46:25.629672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:47.184 [2024-07-15 07:46:25.629950] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:47.184 [2024-07-15 07:46:25.629962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:37:47.184 [2024-07-15 07:46:25.629992] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:37:47.184 [2024-07-15 07:46:25.630007] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:37:47.184 [2024-07-15 07:46:25.630026] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:47.184 [2024-07-15 07:46:25.630039] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:47.184 [2024-07-15 07:46:25.630054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:47.184 [2024-07-15 07:46:25.630067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:47.184 [2024-07-15 07:46:25.630082] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:47.184 [2024-07-15 07:46:25.630093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:47.184 [2024-07-15 07:46:25.630108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:47.184 [2024-07-15 07:46:25.630121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.184 [2024-07-15 07:46:25.630137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:47.184 [2024-07-15 07:46:25.630150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:37:47.184 [2024-07-15 07:46:25.630166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.648349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.184 [2024-07-15 07:46:25.648413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:47.184 [2024-07-15 07:46:25.648431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.128 ms 00:37:47.184 [2024-07-15 07:46:25.648451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.649057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:47.184 [2024-07-15 07:46:25.649098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:47.184 [2024-07-15 07:46:25.649115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:37:47.184 [2024-07-15 07:46:25.649130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.712153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.184 [2024-07-15 07:46:25.712231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:47.184 [2024-07-15 07:46:25.712266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.184 [2024-07-15 07:46:25.712282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.712454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.184 [2024-07-15 07:46:25.712514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:47.184 [2024-07-15 07:46:25.712531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.184 [2024-07-15 07:46:25.712547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.712645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.184 [2024-07-15 07:46:25.712674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:47.184 [2024-07-15 07:46:25.712687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.184 [2024-07-15 07:46:25.712705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.184 [2024-07-15 07:46:25.712745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.184 [2024-07-15 07:46:25.712763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:47.184 [2024-07-15 07:46:25.712776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.184 [2024-07-15 07:46:25.712791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.831112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:37:47.443 [2024-07-15 07:46:25.831201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:47.443 [2024-07-15 07:46:25.831222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.831239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.921733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.921826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:47.443 [2024-07-15 07:46:25.921850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.921867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:47.443 [2024-07-15 07:46:25.922065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:47.443 [2024-07-15 07:46:25.922185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:47.443 [2024-07-15 07:46:25.922434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:47.443 [2024-07-15 07:46:25.922630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:47.443 [2024-07-15 07:46:25.922752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 07:46:25.922850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:47.443 [2024-07-15 07:46:25.922871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:47.443 [2024-07-15 07:46:25.922886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:47.443 [2024-07-15 07:46:25.922900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:47.443 [2024-07-15 
07:46:25.923193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 488.628 ms, result 0 00:37:47.443 true 00:37:47.443 07:46:25 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 81012 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81012 ']' 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81012 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81012 00:37:47.443 killing process with pid 81012 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81012' 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81012 00:37:47.443 07:46:25 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81012 00:37:52.708 07:46:30 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:37:53.640 65536+0 records in 00:37:53.640 65536+0 records out 00:37:53.640 268435456 bytes (268 MB, 256 MiB) copied, 1.27251 s, 211 MB/s 00:37:53.640 07:46:32 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:53.640 [2024-07-15 07:46:32.153141] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
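The dd step above is where trim.sh generates the 256 MiB random test pattern (presumably the random_pattern file that spdk_dd reads next): 65536 records at bs=4K is 65536 * 4096 = 268435456 bytes, and at 1.27251 s that works out to the ~211 MB/s dd reports. A quick sanity check of those figures, done outside the test in plain bash with only the numbers from the log:

    $ echo $(( 65536 * 4096 ))                                          # 268435456 bytes (256 MiB), matching dd's "268 MB, 256 MiB"
    $ awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.27251 / 1e6 }'   # ~211 MB/s, matching dd's reported rate

spdk_dd then replays that pattern onto the ftl0 bdev via the --if/--ob/--json options shown in its command line above.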
00:37:53.640 [2024-07-15 07:46:32.153315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81227 ] 00:37:53.898 [2024-07-15 07:46:32.332084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.155 [2024-07-15 07:46:32.648121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.719 [2024-07-15 07:46:33.089288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:54.719 [2024-07-15 07:46:33.089426] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:54.719 [2024-07-15 07:46:33.265446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.265583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:54.719 [2024-07-15 07:46:33.265608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:37:54.719 [2024-07-15 07:46:33.265622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.269911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.269957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:54.719 [2024-07-15 07:46:33.269976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:37:54.719 [2024-07-15 07:46:33.269988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.270138] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:54.719 [2024-07-15 07:46:33.271184] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:54.719 [2024-07-15 07:46:33.271231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.271248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:54.719 [2024-07-15 07:46:33.271262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:37:54.719 [2024-07-15 07:46:33.271274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.273760] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:54.719 [2024-07-15 07:46:33.292081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.292169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:54.719 [2024-07-15 07:46:33.292208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:37:54.719 [2024-07-15 07:46:33.292222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.292520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.292547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:54.719 [2024-07-15 07:46:33.292563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:37:54.719 [2024-07-15 07:46:33.292576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.307402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:37:54.719 [2024-07-15 07:46:33.307532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:54.719 [2024-07-15 07:46:33.307582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.737 ms 00:37:54.719 [2024-07-15 07:46:33.307597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.307871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.307895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:54.719 [2024-07-15 07:46:33.307910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:37:54.719 [2024-07-15 07:46:33.307922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.307985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.308004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:54.719 [2024-07-15 07:46:33.308017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:37:54.719 [2024-07-15 07:46:33.308038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.308088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:37:54.719 [2024-07-15 07:46:33.314010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.314052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:54.719 [2024-07-15 07:46:33.314071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.935 ms 00:37:54.719 [2024-07-15 07:46:33.314083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.314163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.314185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:54.719 [2024-07-15 07:46:33.314199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:37:54.719 [2024-07-15 07:46:33.314211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.314247] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:54.719 [2024-07-15 07:46:33.314289] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:54.719 [2024-07-15 07:46:33.314347] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:54.719 [2024-07-15 07:46:33.314371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:37:54.719 [2024-07-15 07:46:33.314500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:54.719 [2024-07-15 07:46:33.314521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:54.719 [2024-07-15 07:46:33.314537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:37:54.719 [2024-07-15 07:46:33.314553] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:54.719 [2024-07-15 07:46:33.314568] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:54.719 [2024-07-15 07:46:33.314581] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:37:54.719 [2024-07-15 07:46:33.314601] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:54.719 [2024-07-15 07:46:33.314613] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:54.719 [2024-07-15 07:46:33.314625] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:54.719 [2024-07-15 07:46:33.314638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.314650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:54.719 [2024-07-15 07:46:33.314663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:37:54.719 [2024-07-15 07:46:33.314675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.314773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.719 [2024-07-15 07:46:33.314788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:54.719 [2024-07-15 07:46:33.314800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:37:54.719 [2024-07-15 07:46:33.314817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.719 [2024-07-15 07:46:33.314931] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:54.719 [2024-07-15 07:46:33.314955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:54.719 [2024-07-15 07:46:33.314989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:54.719 [2024-07-15 07:46:33.315029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:54.719 [2024-07-15 07:46:33.315061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:54.719 [2024-07-15 07:46:33.315083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:54.719 [2024-07-15 07:46:33.315096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:37:54.719 [2024-07-15 07:46:33.315107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:54.719 [2024-07-15 07:46:33.315118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:54.719 [2024-07-15 07:46:33.315130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:37:54.719 [2024-07-15 07:46:33.315151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:54.719 [2024-07-15 07:46:33.315173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315200] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:54.719 [2024-07-15 07:46:33.315223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:54.719 [2024-07-15 07:46:33.315255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:54.719 [2024-07-15 07:46:33.315290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:54.719 [2024-07-15 07:46:33.315324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:54.719 [2024-07-15 07:46:33.315345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:54.719 [2024-07-15 07:46:33.315356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:37:54.719 [2024-07-15 07:46:33.315367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:54.720 [2024-07-15 07:46:33.315378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:54.720 [2024-07-15 07:46:33.315390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:37:54.720 [2024-07-15 07:46:33.315401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:54.720 [2024-07-15 07:46:33.315413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:54.720 [2024-07-15 07:46:33.315425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:37:54.720 [2024-07-15 07:46:33.315435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.720 [2024-07-15 07:46:33.315446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:54.720 [2024-07-15 07:46:33.315474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:37:54.720 [2024-07-15 07:46:33.315487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.720 [2024-07-15 07:46:33.315506] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:54.720 [2024-07-15 07:46:33.315518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:54.720 [2024-07-15 07:46:33.315531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:54.720 [2024-07-15 07:46:33.315543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:54.720 [2024-07-15 07:46:33.315556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:54.720 [2024-07-15 07:46:33.315567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:54.720 [2024-07-15 07:46:33.315579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:54.720 
[2024-07-15 07:46:33.315591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:54.720 [2024-07-15 07:46:33.315602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:54.720 [2024-07-15 07:46:33.315614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:54.720 [2024-07-15 07:46:33.315627] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:54.720 [2024-07-15 07:46:33.315648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:37:54.720 [2024-07-15 07:46:33.315675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:37:54.720 [2024-07-15 07:46:33.315687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:37:54.720 [2024-07-15 07:46:33.315699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:37:54.720 [2024-07-15 07:46:33.315712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:37:54.720 [2024-07-15 07:46:33.315724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:37:54.720 [2024-07-15 07:46:33.315737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:37:54.720 [2024-07-15 07:46:33.315749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:37:54.720 [2024-07-15 07:46:33.315761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:37:54.720 [2024-07-15 07:46:33.315773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:37:54.720 [2024-07-15 07:46:33.315834] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:54.720 [2024-07-15 07:46:33.315847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:54.720 [2024-07-15 07:46:33.315873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:54.720 [2024-07-15 07:46:33.315886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:54.720 [2024-07-15 07:46:33.315898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:54.720 [2024-07-15 07:46:33.315918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.720 [2024-07-15 07:46:33.315930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:54.720 [2024-07-15 07:46:33.315944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:37:54.720 [2024-07-15 07:46:33.315955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.379494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.379748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:54.977 [2024-07-15 07:46:33.379876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.451 ms 00:37:54.977 [2024-07-15 07:46:33.379933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.380303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.380434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:54.977 [2024-07-15 07:46:33.380579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:37:54.977 [2024-07-15 07:46:33.380709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.430509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.430744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:54.977 [2024-07-15 07:46:33.430907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.716 ms 00:37:54.977 [2024-07-15 07:46:33.430975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.431245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.431376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:54.977 [2024-07-15 07:46:33.431506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:37:54.977 [2024-07-15 07:46:33.431625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.432468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.432596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:54.977 [2024-07-15 07:46:33.432707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:37:54.977 [2024-07-15 07:46:33.432823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.433060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.433118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:54.977 [2024-07-15 07:46:33.433220] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:37:54.977 [2024-07-15 07:46:33.433355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.454823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.455035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:54.977 [2024-07-15 07:46:33.455177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.384 ms 00:37:54.977 [2024-07-15 07:46:33.455230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.474524] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:54.977 [2024-07-15 07:46:33.474847] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:54.977 [2024-07-15 07:46:33.474892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.474917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:54.977 [2024-07-15 07:46:33.474943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.317 ms 00:37:54.977 [2024-07-15 07:46:33.474979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.507600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.507683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:54.977 [2024-07-15 07:46:33.507724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.365 ms 00:37:54.977 [2024-07-15 07:46:33.507737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.525512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.525570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:54.977 [2024-07-15 07:46:33.525590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.601 ms 00:37:54.977 [2024-07-15 07:46:33.525603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.540972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.541044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:54.977 [2024-07-15 07:46:33.541061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.251 ms 00:37:54.977 [2024-07-15 07:46:33.541074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:54.977 [2024-07-15 07:46:33.542102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:54.977 [2024-07-15 07:46:33.542139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:54.977 [2024-07-15 07:46:33.542163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:37:54.977 [2024-07-15 07:46:33.542176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.631298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.631416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:55.234 [2024-07-15 07:46:33.631488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.081 ms 00:37:55.234 [2024-07-15 07:46:33.631503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.644572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:37:55.234 [2024-07-15 07:46:33.673383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.673514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:55.234 [2024-07-15 07:46:33.673539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.694 ms 00:37:55.234 [2024-07-15 07:46:33.673552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.673739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.673766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:55.234 [2024-07-15 07:46:33.673781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:37:55.234 [2024-07-15 07:46:33.673803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.673898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.673922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:55.234 [2024-07-15 07:46:33.673937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:37:55.234 [2024-07-15 07:46:33.673949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.673989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.674004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:55.234 [2024-07-15 07:46:33.674018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:55.234 [2024-07-15 07:46:33.674030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.674107] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:55.234 [2024-07-15 07:46:33.674128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.674142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:55.234 [2024-07-15 07:46:33.674156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:37:55.234 [2024-07-15 07:46:33.674170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.708106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.708169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:55.234 [2024-07-15 07:46:33.708191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.902 ms 00:37:55.234 [2024-07-15 07:46:33.708219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:55.234 [2024-07-15 07:46:33.708363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:55.234 [2024-07-15 07:46:33.708386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:55.234 [2024-07-15 07:46:33.708429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:37:55.234 [2024-07-15 07:46:33.708441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
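Worth noting from the layout dump above: the l2p region size is just the reported L2P parameters multiplied out. With 23592960 entries at an address size of 4 bytes, the table needs 90 MiB, which matches both the 90.00 MiB reported for the l2p region and the blk_sz:0x5a00 entry in the superblock dump (assuming the 4 KiB FTL block size implied by the 0x1900000-block data region being reported as 102400.00 MiB). A rough check in plain bash, outside the test, using only numbers from the log:

    $ echo $(( 23592960 * 4 ))              # 94371840 bytes for the L2P table
    $ echo $(( (23592960 * 4) / 1048576 ))  # 90 MiB, matching the l2p region size above
    $ echo $(( 0x5a00 * 4096 ))             # 94371840 bytes again: blk_sz 0x5a00 blocks at 4 KiB each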
00:37:55.234 [2024-07-15 07:46:33.709949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:55.234 [2024-07-15 07:46:33.714263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.128 ms, result 0 00:37:55.234 [2024-07-15 07:46:33.715191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:55.234 [2024-07-15 07:46:33.731908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:05.476  Copying: 24/256 [MB] (24 MBps) Copying: 47/256 [MB] (23 MBps) Copying: 73/256 [MB] (25 MBps) Copying: 98/256 [MB] (25 MBps) Copying: 123/256 [MB] (24 MBps) Copying: 147/256 [MB] (23 MBps) Copying: 173/256 [MB] (25 MBps) Copying: 200/256 [MB] (27 MBps) Copying: 227/256 [MB] (26 MBps) Copying: 253/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-15 07:46:43.814639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:05.476 [2024-07-15 07:46:43.828194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.828267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:05.476 [2024-07-15 07:46:43.828290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:05.476 [2024-07-15 07:46:43.828303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.828342] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:05.476 [2024-07-15 07:46:43.832600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.832640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:05.476 [2024-07-15 07:46:43.832660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:38:05.476 [2024-07-15 07:46:43.832689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.834388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.834418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:05.476 [2024-07-15 07:46:43.834434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:38:05.476 [2024-07-15 07:46:43.834446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.841575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.841618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:05.476 [2024-07-15 07:46:43.841635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.076 ms 00:38:05.476 [2024-07-15 07:46:43.841649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.849274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.849432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:05.476 [2024-07-15 07:46:43.849580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.525 ms 00:38:05.476 [2024-07-15 07:46:43.849634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 
07:46:43.883543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.883814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:05.476 [2024-07-15 07:46:43.883944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.723 ms 00:38:05.476 [2024-07-15 07:46:43.883996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.902933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.903232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:05.476 [2024-07-15 07:46:43.903362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.784 ms 00:38:05.476 [2024-07-15 07:46:43.903416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.903794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.903944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:05.476 [2024-07-15 07:46:43.904057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:38:05.476 [2024-07-15 07:46:43.904185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.937973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.938272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:05.476 [2024-07-15 07:46:43.938413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.709 ms 00:38:05.476 [2024-07-15 07:46:43.938550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:43.969246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:43.969532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:05.476 [2024-07-15 07:46:43.969661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.514 ms 00:38:05.476 [2024-07-15 07:46:43.969784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:44.001705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:44.002022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:05.476 [2024-07-15 07:46:44.002150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.769 ms 00:38:05.476 [2024-07-15 07:46:44.002204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:44.033105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.476 [2024-07-15 07:46:44.033381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:05.476 [2024-07-15 07:46:44.033425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.604 ms 00:38:05.476 [2024-07-15 07:46:44.033440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.476 [2024-07-15 07:46:44.033560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:05.476 [2024-07-15 07:46:44.033594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:38:05.477 [2024-07-15 07:46:44.033624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:38:05.477 [2024-07-15 07:46:44.033943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.033995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:05.477 [2024-07-15 07:46:44.034249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034604] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:05.478 [2024-07-15 07:46:44.034921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:05.478 [2024-07-15 07:46:44.034945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:05.478 [2024-07-15 07:46:44.034958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:05.478 [2024-07-15 07:46:44.034981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:05.478 [2024-07-15 07:46:44.034992] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:05.478 [2024-07-15 07:46:44.035022] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:05.478 [2024-07-15 07:46:44.035034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:05.478 [2024-07-15 07:46:44.035047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:05.478 [2024-07-15 07:46:44.035059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:05.478 [2024-07-15 07:46:44.035070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:05.478 [2024-07-15 07:46:44.035080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:05.478 [2024-07-15 07:46:44.035092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.478 [2024-07-15 07:46:44.035105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:05.478 [2024-07-15 07:46:44.035118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:38:05.478 [2024-07-15 07:46:44.035130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.478 [2024-07-15 07:46:44.053020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.478 [2024-07-15 07:46:44.053109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:05.478 [2024-07-15 07:46:44.053132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.846 ms 00:38:05.478 [2024-07-15 07:46:44.053146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.478 [2024-07-15 07:46:44.053772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:05.479 [2024-07-15 07:46:44.053798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:05.479 [2024-07-15 07:46:44.053815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:38:05.479 [2024-07-15 07:46:44.053837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.097734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.097830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:05.770 [2024-07-15 07:46:44.097852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.097866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.098026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.098046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:05.770 [2024-07-15 07:46:44.098059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.098089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.098178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.098200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:38:05.770 [2024-07-15 07:46:44.098214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.098226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.098254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.098270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:05.770 [2024-07-15 07:46:44.098283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.098295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.215795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.215889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:05.770 [2024-07-15 07:46:44.215912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.215927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.310749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.310846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:05.770 [2024-07-15 07:46:44.310869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.310884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:05.770 [2024-07-15 07:46:44.311063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:05.770 [2024-07-15 07:46:44.311148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:05.770 [2024-07-15 07:46:44.311347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:05.770 [2024-07-15 07:46:44.311495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311602] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:05.770 [2024-07-15 07:46:44.311616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:05.770 [2024-07-15 07:46:44.311711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:05.770 [2024-07-15 07:46:44.311724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:05.770 [2024-07-15 07:46:44.311735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:05.770 [2024-07-15 07:46:44.311959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.772 ms, result 0 00:38:07.142 00:38:07.142 00:38:07.142 07:46:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81362 00:38:07.142 07:46:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:38:07.142 07:46:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81362 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81362 ']' 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:07.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:07.142 07:46:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:38:07.142 [2024-07-15 07:46:45.730737] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
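[editor's note] For context on the harness steps recorded above and further down in this log (trim.sh@71-81: start spdk_tgt, waitforlisten on /var/tmp/spdk.sock, rpc.py load_config, two bdev_ftl_unmap calls, killprocess), here is a minimal bash sketch of the same sequence. Only the spdk_tgt and rpc.py invocations are taken verbatim from the log; the polling loop, the ftl.json file name, and the kill/wait pair are illustrative stand-ins for the real waitforlisten/killprocess helpers in autotest_common.sh and for the JSON the harness pipes into load_config.

#!/usr/bin/env bash
# Sketch only: simplified stand-in for the ftl/trim.sh flow seen in this log,
# not the actual test code.
SPDK=/home/vagrant/spdk_repo/spdk

# Start the target with FTL init logging, as on trim.sh@71/72.
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Minimal replacement for waitforlisten: poll until the RPC socket answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Recreate the FTL bdev from a saved config (ftl.json is a hypothetical name;
# the harness feeds its own JSON into load_config on stdin).
"$SPDK/scripts/rpc.py" load_config < ftl.json

# The two trims issued later in this log: with 23592960 L2P entries, these cover
# the first and the last 1024 blocks of the addressable range.
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# killprocess equivalent.
kill "$svcpid"
wait "$svcpid" || true

The remainder of the log below shows the target coming back up ("FTL startup" management process), the two "FTL trim" management processes triggered by the unmap calls, and finally the "FTL shutdown" rollback/persist steps after the process is killed.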
00:38:07.142 [2024-07-15 07:46:45.730928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81362 ] 00:38:07.399 [2024-07-15 07:46:45.900735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.657 [2024-07-15 07:46:46.176772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.588 07:46:47 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:08.588 07:46:47 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:38:08.588 07:46:47 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:38:08.846 [2024-07-15 07:46:47.321534] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:08.846 [2024-07-15 07:46:47.321661] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:09.105 [2024-07-15 07:46:47.508090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.508183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:09.105 [2024-07-15 07:46:47.508207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:09.105 [2024-07-15 07:46:47.508224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.512159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.512222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:09.105 [2024-07-15 07:46:47.512242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:38:09.105 [2024-07-15 07:46:47.512257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.512445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:09.105 [2024-07-15 07:46:47.513501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:09.105 [2024-07-15 07:46:47.513542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.513562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:09.105 [2024-07-15 07:46:47.513575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:38:09.105 [2024-07-15 07:46:47.513590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.516189] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:09.105 [2024-07-15 07:46:47.534471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.534569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:09.105 [2024-07-15 07:46:47.534606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.251 ms 00:38:09.105 [2024-07-15 07:46:47.534623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.534869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.534894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:09.105 [2024-07-15 07:46:47.534917] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:38:09.105 [2024-07-15 07:46:47.534931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.547737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.547815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:09.105 [2024-07-15 07:46:47.547857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.627 ms 00:38:09.105 [2024-07-15 07:46:47.547872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.548179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.548211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:09.105 [2024-07-15 07:46:47.548236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:38:09.105 [2024-07-15 07:46:47.548251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.548321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.105 [2024-07-15 07:46:47.548339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:09.105 [2024-07-15 07:46:47.548359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:38:09.105 [2024-07-15 07:46:47.548373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.105 [2024-07-15 07:46:47.548423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:09.105 [2024-07-15 07:46:47.554167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.106 [2024-07-15 07:46:47.554216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:09.106 [2024-07-15 07:46:47.554234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.765 ms 00:38:09.106 [2024-07-15 07:46:47.554249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.106 [2024-07-15 07:46:47.554353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.106 [2024-07-15 07:46:47.554382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:09.106 [2024-07-15 07:46:47.554396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:38:09.106 [2024-07-15 07:46:47.554415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.106 [2024-07-15 07:46:47.554468] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:09.106 [2024-07-15 07:46:47.554509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:09.106 [2024-07-15 07:46:47.554566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:09.106 [2024-07-15 07:46:47.554596] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:09.106 [2024-07-15 07:46:47.554706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:09.106 [2024-07-15 07:46:47.554729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:09.106 [2024-07-15 07:46:47.554748] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:09.106 [2024-07-15 07:46:47.554768] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:09.106 [2024-07-15 07:46:47.554782] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:09.106 [2024-07-15 07:46:47.554799] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:09.106 [2024-07-15 07:46:47.554811] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:09.106 [2024-07-15 07:46:47.554826] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:09.106 [2024-07-15 07:46:47.554838] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:09.106 [2024-07-15 07:46:47.554858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.106 [2024-07-15 07:46:47.554870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:09.106 [2024-07-15 07:46:47.554885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:38:09.106 [2024-07-15 07:46:47.554897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.106 [2024-07-15 07:46:47.555014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.106 [2024-07-15 07:46:47.555032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:09.106 [2024-07-15 07:46:47.555048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:38:09.106 [2024-07-15 07:46:47.555060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.106 [2024-07-15 07:46:47.555191] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:09.106 [2024-07-15 07:46:47.555212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:09.106 [2024-07-15 07:46:47.555228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:09.106 [2024-07-15 07:46:47.555266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:09.106 [2024-07-15 07:46:47.555311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:09.106 [2024-07-15 07:46:47.555335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:09.106 [2024-07-15 07:46:47.555346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:09.106 [2024-07-15 07:46:47.555359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:09.106 [2024-07-15 07:46:47.555370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:09.106 [2024-07-15 07:46:47.555383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:09.106 [2024-07-15 07:46:47.555394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 
[2024-07-15 07:46:47.555407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:09.106 [2024-07-15 07:46:47.555420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:09.106 [2024-07-15 07:46:47.555480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:09.106 [2024-07-15 07:46:47.555543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:09.106 [2024-07-15 07:46:47.555584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:09.106 [2024-07-15 07:46:47.555633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:09.106 [2024-07-15 07:46:47.555673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:09.106 [2024-07-15 07:46:47.555697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:09.106 [2024-07-15 07:46:47.555708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:09.106 [2024-07-15 07:46:47.555721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:09.106 [2024-07-15 07:46:47.555731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:09.106 [2024-07-15 07:46:47.555745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:09.106 [2024-07-15 07:46:47.555755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:09.106 [2024-07-15 07:46:47.555783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:09.106 [2024-07-15 07:46:47.555796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555806] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:09.106 [2024-07-15 07:46:47.555825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:09.106 [2024-07-15 07:46:47.555837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:09.106 [2024-07-15 07:46:47.555863] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:38:09.106 [2024-07-15 07:46:47.555877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:09.106 [2024-07-15 07:46:47.555889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:09.106 [2024-07-15 07:46:47.555903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:09.106 [2024-07-15 07:46:47.555914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:09.106 [2024-07-15 07:46:47.555928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:09.106 [2024-07-15 07:46:47.555941] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:09.106 [2024-07-15 07:46:47.555959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.555972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:09.106 [2024-07-15 07:46:47.555991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:09.106 [2024-07-15 07:46:47.556003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:09.106 [2024-07-15 07:46:47.556017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:09.106 [2024-07-15 07:46:47.556029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:09.106 [2024-07-15 07:46:47.556043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:09.106 [2024-07-15 07:46:47.556055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:09.106 [2024-07-15 07:46:47.556069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:09.106 [2024-07-15 07:46:47.556080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:09.106 [2024-07-15 07:46:47.556094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:09.106 [2024-07-15 07:46:47.556157] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:09.106 [2024-07-15 
07:46:47.556174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:09.106 [2024-07-15 07:46:47.556204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:09.106 [2024-07-15 07:46:47.556216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:09.106 [2024-07-15 07:46:47.556230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:09.107 [2024-07-15 07:46:47.556244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.556259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:09.107 [2024-07-15 07:46:47.556272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:38:09.107 [2024-07-15 07:46:47.556286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.604152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.604242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:09.107 [2024-07-15 07:46:47.604268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.766 ms 00:38:09.107 [2024-07-15 07:46:47.604296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.604559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.604593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:09.107 [2024-07-15 07:46:47.604610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:38:09.107 [2024-07-15 07:46:47.604629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.653759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.653871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:09.107 [2024-07-15 07:46:47.653895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.090 ms 00:38:09.107 [2024-07-15 07:46:47.653912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.654079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.654105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:09.107 [2024-07-15 07:46:47.654121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:09.107 [2024-07-15 07:46:47.654136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.654939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.655002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:09.107 [2024-07-15 07:46:47.655035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:38:09.107 [2024-07-15 07:46:47.655059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.655284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.655310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:09.107 [2024-07-15 07:46:47.655323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:38:09.107 [2024-07-15 07:46:47.655338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.680163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.680269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:09.107 [2024-07-15 07:46:47.680295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.787 ms 00:38:09.107 [2024-07-15 07:46:47.680314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.107 [2024-07-15 07:46:47.698812] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:38:09.107 [2024-07-15 07:46:47.698922] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:09.107 [2024-07-15 07:46:47.698950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.107 [2024-07-15 07:46:47.698979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:09.107 [2024-07-15 07:46:47.699000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.361 ms 00:38:09.107 [2024-07-15 07:46:47.699017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.364 [2024-07-15 07:46:47.730946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.364 [2024-07-15 07:46:47.731087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:09.364 [2024-07-15 07:46:47.731112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.718 ms 00:38:09.364 [2024-07-15 07:46:47.731130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.364 [2024-07-15 07:46:47.749608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.364 [2024-07-15 07:46:47.749736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:09.365 [2024-07-15 07:46:47.749780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.257 ms 00:38:09.365 [2024-07-15 07:46:47.749802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.767387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.767552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:09.365 [2024-07-15 07:46:47.767580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.357 ms 00:38:09.365 [2024-07-15 07:46:47.767596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.768718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.768754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:09.365 [2024-07-15 07:46:47.768771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:38:09.365 [2024-07-15 07:46:47.768787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 
07:46:47.864726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.864860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:09.365 [2024-07-15 07:46:47.864887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.899 ms 00:38:09.365 [2024-07-15 07:46:47.864905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.881561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:09.365 [2024-07-15 07:46:47.910472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.910582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:09.365 [2024-07-15 07:46:47.910619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.329 ms 00:38:09.365 [2024-07-15 07:46:47.910639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.910836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.910858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:09.365 [2024-07-15 07:46:47.910876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:09.365 [2024-07-15 07:46:47.910889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.911000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.911020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:09.365 [2024-07-15 07:46:47.911038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:38:09.365 [2024-07-15 07:46:47.911050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.911100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.911116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:09.365 [2024-07-15 07:46:47.911136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:09.365 [2024-07-15 07:46:47.911148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.911198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:09.365 [2024-07-15 07:46:47.911216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.911235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:09.365 [2024-07-15 07:46:47.911249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:38:09.365 [2024-07-15 07:46:47.911276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.945648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.945730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:09.365 [2024-07-15 07:46:47.945754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.332 ms 00:38:09.365 [2024-07-15 07:46:47.945771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.945992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.365 [2024-07-15 07:46:47.946020] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:09.365 [2024-07-15 07:46:47.946037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:38:09.365 [2024-07-15 07:46:47.946053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.365 [2024-07-15 07:46:47.947622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:09.365 [2024-07-15 07:46:47.952596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 439.080 ms, result 0 00:38:09.365 [2024-07-15 07:46:47.953778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:09.627 Some configs were skipped because the RPC state that can call them passed over. 00:38:09.627 07:46:48 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:38:09.627 [2024-07-15 07:46:48.232144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.627 [2024-07-15 07:46:48.232502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:38:09.627 [2024-07-15 07:46:48.232696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:38:09.627 [2024-07-15 07:46:48.232888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.627 [2024-07-15 07:46:48.233017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.370 ms, result 0 00:38:09.627 true 00:38:09.884 07:46:48 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:38:09.884 [2024-07-15 07:46:48.476049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:09.884 [2024-07-15 07:46:48.476141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:38:09.884 [2024-07-15 07:46:48.476166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:38:09.884 [2024-07-15 07:46:48.476183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:09.884 [2024-07-15 07:46:48.476238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.312 ms, result 0 00:38:09.884 true 00:38:09.884 07:46:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81362 00:38:09.884 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81362 ']' 00:38:09.884 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81362 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81362 00:38:10.141 killing process with pid 81362 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81362' 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81362 00:38:10.141 07:46:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81362 00:38:11.075 [2024-07-15 07:46:49.667119] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.075 [2024-07-15 07:46:49.667218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:11.075 [2024-07-15 07:46:49.667247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:11.075 [2024-07-15 07:46:49.667261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.075 [2024-07-15 07:46:49.667302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:11.075 [2024-07-15 07:46:49.671396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.075 [2024-07-15 07:46:49.671469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:11.075 [2024-07-15 07:46:49.671490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:38:11.075 [2024-07-15 07:46:49.671510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.075 [2024-07-15 07:46:49.671889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.075 [2024-07-15 07:46:49.671921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:11.075 [2024-07-15 07:46:49.671937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:38:11.075 [2024-07-15 07:46:49.671951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.075 [2024-07-15 07:46:49.676078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.075 [2024-07-15 07:46:49.676134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:11.075 [2024-07-15 07:46:49.676157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:38:11.075 [2024-07-15 07:46:49.676173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.075 [2024-07-15 07:46:49.683563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.075 [2024-07-15 07:46:49.683623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:11.075 [2024-07-15 07:46:49.683642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:38:11.075 [2024-07-15 07:46:49.683661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.697367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.697448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:11.335 [2024-07-15 07:46:49.697487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.609 ms 00:38:11.335 [2024-07-15 07:46:49.697508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.706424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.706506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:11.335 [2024-07-15 07:46:49.706532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.833 ms 00:38:11.335 [2024-07-15 07:46:49.706549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.706738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.706765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:11.335 [2024-07-15 07:46:49.706781] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:38:11.335 [2024-07-15 07:46:49.706815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.719583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.719648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:11.335 [2024-07-15 07:46:49.719668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.736 ms 00:38:11.335 [2024-07-15 07:46:49.719684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.732010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.732080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:11.335 [2024-07-15 07:46:49.732101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.271 ms 00:38:11.335 [2024-07-15 07:46:49.732137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.744308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.744386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:11.335 [2024-07-15 07:46:49.744408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.105 ms 00:38:11.335 [2024-07-15 07:46:49.744429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.756425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.335 [2024-07-15 07:46:49.756510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:11.335 [2024-07-15 07:46:49.756533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:38:11.335 [2024-07-15 07:46:49.756553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.335 [2024-07-15 07:46:49.756608] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:11.335 [2024-07-15 07:46:49.756647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 
07:46:49.756822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.756987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:11.335 [2024-07-15 07:46:49.757001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:38:11.336 [2024-07-15 07:46:49.757230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.757999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:11.336 [2024-07-15 07:46:49.758133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:11.337 [2024-07-15 07:46:49.758340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:11.337 [2024-07-15 07:46:49.758354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:11.337 [2024-07-15 07:46:49.758386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:11.337 [2024-07-15 07:46:49.758399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:11.337 [2024-07-15 07:46:49.758416] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:11.337 [2024-07-15 07:46:49.758430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:11.337 [2024-07-15 07:46:49.758447] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:11.337 [2024-07-15 07:46:49.758475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:11.337 [2024-07-15 07:46:49.758495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:11.337 [2024-07-15 07:46:49.758506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:11.337 [2024-07-15 07:46:49.758543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:11.337 [2024-07-15 07:46:49.758557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:11.337 [2024-07-15 07:46:49.758576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:11.337 [2024-07-15 07:46:49.758590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.951 ms 00:38:11.337 [2024-07-15 07:46:49.758608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.777056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.337 [2024-07-15 07:46:49.777182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:11.337 [2024-07-15 07:46:49.777208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.361 ms 00:38:11.337 [2024-07-15 07:46:49.777234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.777889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:11.337 [2024-07-15 07:46:49.777929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:11.337 [2024-07-15 07:46:49.777953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:38:11.337 [2024-07-15 07:46:49.777980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.837856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.337 [2024-07-15 07:46:49.837961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:11.337 [2024-07-15 07:46:49.837986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.337 [2024-07-15 07:46:49.838007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.838195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.337 [2024-07-15 07:46:49.838221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:11.337 [2024-07-15 07:46:49.838235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.337 [2024-07-15 07:46:49.838255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.838343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.337 [2024-07-15 07:46:49.838368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:11.337 [2024-07-15 07:46:49.838383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.337 [2024-07-15 07:46:49.838401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.337 [2024-07-15 07:46:49.838431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.337 [2024-07-15 07:46:49.838476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:11.337 [2024-07-15 07:46:49.838494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.337 [2024-07-15 07:46:49.838510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:49.954907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:49.955025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:11.595 [2024-07-15 07:46:49.955051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:49.955072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 
07:46:50.054540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.054656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:11.595 [2024-07-15 07:46:50.054680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.054702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.054850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.054882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:11.595 [2024-07-15 07:46:50.054899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.054924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.054983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.055018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:11.595 [2024-07-15 07:46:50.055039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.055065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.055261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.055292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:11.595 [2024-07-15 07:46:50.055308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.055326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.055386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.055420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:11.595 [2024-07-15 07:46:50.055435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.055476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.055542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.055580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:11.595 [2024-07-15 07:46:50.055594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.055618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.055687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:11.595 [2024-07-15 07:46:50.055716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:11.595 [2024-07-15 07:46:50.055732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:11.595 [2024-07-15 07:46:50.055750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:11.595 [2024-07-15 07:46:50.055967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.818 ms, result 0 00:38:12.970 07:46:51 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:38:12.970 07:46:51 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:12.970 [2024-07-15 07:46:51.310711] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:38:12.970 [2024-07-15 07:46:51.310892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81437 ] 00:38:12.970 [2024-07-15 07:46:51.480894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.228 [2024-07-15 07:46:51.756918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.796 [2024-07-15 07:46:52.153039] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:13.796 [2024-07-15 07:46:52.153157] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:13.796 [2024-07-15 07:46:52.321331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.321431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:13.796 [2024-07-15 07:46:52.321481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:13.796 [2024-07-15 07:46:52.321499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.325228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.325276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:13.796 [2024-07-15 07:46:52.325295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:38:13.796 [2024-07-15 07:46:52.325307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.325538] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:13.796 [2024-07-15 07:46:52.326569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:13.796 [2024-07-15 07:46:52.326615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.326631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:13.796 [2024-07-15 07:46:52.326644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:38:13.796 [2024-07-15 07:46:52.326657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.329304] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:13.796 [2024-07-15 07:46:52.347217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.347300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:13.796 [2024-07-15 07:46:52.347332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.912 ms 00:38:13.796 [2024-07-15 07:46:52.347346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.347577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.347602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:13.796 [2024-07-15 07:46:52.347618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.049 ms 00:38:13.796 [2024-07-15 07:46:52.347631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.360513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.360600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:13.796 [2024-07-15 07:46:52.360623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:38:13.796 [2024-07-15 07:46:52.360636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.360871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.360898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:13.796 [2024-07-15 07:46:52.360913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:38:13.796 [2024-07-15 07:46:52.360926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.360983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.361003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:13.796 [2024-07-15 07:46:52.361016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:13.796 [2024-07-15 07:46:52.361034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.361081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:13.796 [2024-07-15 07:46:52.366960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.367011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:13.796 [2024-07-15 07:46:52.367029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.893 ms 00:38:13.796 [2024-07-15 07:46:52.367042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.367131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.367150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:13.796 [2024-07-15 07:46:52.367164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:13.796 [2024-07-15 07:46:52.367176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.367214] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:13.796 [2024-07-15 07:46:52.367250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:13.796 [2024-07-15 07:46:52.367302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:13.796 [2024-07-15 07:46:52.367324] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:13.796 [2024-07-15 07:46:52.367433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:13.796 [2024-07-15 07:46:52.367472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:13.796 [2024-07-15 07:46:52.367493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:13.796 [2024-07-15 07:46:52.367517] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:13.796 [2024-07-15 07:46:52.367532] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:13.796 [2024-07-15 07:46:52.367546] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:13.796 [2024-07-15 07:46:52.367564] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:13.796 [2024-07-15 07:46:52.367577] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:13.796 [2024-07-15 07:46:52.367589] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:13.796 [2024-07-15 07:46:52.367603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.367616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:13.796 [2024-07-15 07:46:52.367629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:38:13.796 [2024-07-15 07:46:52.367641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.367743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.796 [2024-07-15 07:46:52.367762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:13.796 [2024-07-15 07:46:52.367775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:38:13.796 [2024-07-15 07:46:52.367793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:13.796 [2024-07-15 07:46:52.367912] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:13.796 [2024-07-15 07:46:52.367932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:13.796 [2024-07-15 07:46:52.367945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:13.796 [2024-07-15 07:46:52.367958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.796 [2024-07-15 07:46:52.367969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:13.796 [2024-07-15 07:46:52.367981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:13.796 [2024-07-15 07:46:52.367992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:13.796 [2024-07-15 07:46:52.368002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:13.796 [2024-07-15 07:46:52.368013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:13.796 [2024-07-15 07:46:52.368023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:13.796 [2024-07-15 07:46:52.368034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:13.796 [2024-07-15 07:46:52.368044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:13.796 [2024-07-15 07:46:52.368054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:13.796 [2024-07-15 07:46:52.368064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:13.796 [2024-07-15 07:46:52.368075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:13.796 [2024-07-15 07:46:52.368085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.796 [2024-07-15 07:46:52.368095] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:13.796 [2024-07-15 07:46:52.368108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:13.796 [2024-07-15 07:46:52.368138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.796 [2024-07-15 07:46:52.368150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:13.796 [2024-07-15 07:46:52.368161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:13.796 [2024-07-15 07:46:52.368172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:13.796 [2024-07-15 07:46:52.368183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:13.796 [2024-07-15 07:46:52.368193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:13.797 [2024-07-15 07:46:52.368214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:13.797 [2024-07-15 07:46:52.368225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:13.797 [2024-07-15 07:46:52.368247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:13.797 [2024-07-15 07:46:52.368257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:13.797 [2024-07-15 07:46:52.368278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:13.797 [2024-07-15 07:46:52.368289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:13.797 [2024-07-15 07:46:52.368311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:13.797 [2024-07-15 07:46:52.368321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:13.797 [2024-07-15 07:46:52.368332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:13.797 [2024-07-15 07:46:52.368342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:13.797 [2024-07-15 07:46:52.368353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:13.797 [2024-07-15 07:46:52.368363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:13.797 [2024-07-15 07:46:52.368384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:13.797 [2024-07-15 07:46:52.368395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368405] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:13.797 [2024-07-15 07:46:52.368416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:13.797 [2024-07-15 07:46:52.368427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:13.797 [2024-07-15 07:46:52.368439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:13.797 [2024-07-15 07:46:52.368465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:13.797 
[2024-07-15 07:46:52.368480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:13.797 [2024-07-15 07:46:52.368493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:13.797 [2024-07-15 07:46:52.368504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:13.797 [2024-07-15 07:46:52.368515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:13.797 [2024-07-15 07:46:52.368527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:13.797 [2024-07-15 07:46:52.368539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:13.797 [2024-07-15 07:46:52.368560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:13.797 [2024-07-15 07:46:52.368586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:13.797 [2024-07-15 07:46:52.368598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:13.797 [2024-07-15 07:46:52.368610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:13.797 [2024-07-15 07:46:52.368621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:13.797 [2024-07-15 07:46:52.368632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:13.797 [2024-07-15 07:46:52.368643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:13.797 [2024-07-15 07:46:52.368655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:13.797 [2024-07-15 07:46:52.368666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:13.797 [2024-07-15 07:46:52.368678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:13.797 [2024-07-15 07:46:52.368735] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:13.797 [2024-07-15 07:46:52.368749] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:13.797 [2024-07-15 07:46:52.368773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:13.797 [2024-07-15 07:46:52.368785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:13.797 [2024-07-15 07:46:52.368797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:13.797 [2024-07-15 07:46:52.368810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:13.797 [2024-07-15 07:46:52.368823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:13.797 [2024-07-15 07:46:52.368836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:38:13.797 [2024-07-15 07:46:52.368847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.430636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.430728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:14.055 [2024-07-15 07:46:52.430755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.695 ms 00:38:14.055 [2024-07-15 07:46:52.430770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.431041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.431066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:14.055 [2024-07-15 07:46:52.431082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:38:14.055 [2024-07-15 07:46:52.431105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.479342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.479420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:14.055 [2024-07-15 07:46:52.479444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.195 ms 00:38:14.055 [2024-07-15 07:46:52.479475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.479651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.479673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:14.055 [2024-07-15 07:46:52.479687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:14.055 [2024-07-15 07:46:52.479699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.480484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.480511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:14.055 [2024-07-15 07:46:52.480526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:38:14.055 [2024-07-15 07:46:52.480538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 
07:46:52.480747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.480774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:14.055 [2024-07-15 07:46:52.480788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:38:14.055 [2024-07-15 07:46:52.480801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.502039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.502115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:14.055 [2024-07-15 07:46:52.502139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.200 ms 00:38:14.055 [2024-07-15 07:46:52.502152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.520815] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:38:14.055 [2024-07-15 07:46:52.520900] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:14.055 [2024-07-15 07:46:52.520927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.520942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:14.055 [2024-07-15 07:46:52.520960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:38:14.055 [2024-07-15 07:46:52.520973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.551649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.551786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:14.055 [2024-07-15 07:46:52.551813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.489 ms 00:38:14.055 [2024-07-15 07:46:52.551827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.570810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.570898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:14.055 [2024-07-15 07:46:52.570922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.746 ms 00:38:14.055 [2024-07-15 07:46:52.570934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.587435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.587537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:14.055 [2024-07-15 07:46:52.587561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.311 ms 00:38:14.055 [2024-07-15 07:46:52.587574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.055 [2024-07-15 07:46:52.588730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.055 [2024-07-15 07:46:52.588768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:14.056 [2024-07-15 07:46:52.588785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:38:14.056 [2024-07-15 07:46:52.588797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.677264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.677393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:14.314 [2024-07-15 07:46:52.677417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.420 ms 00:38:14.314 [2024-07-15 07:46:52.677430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.694229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:14.314 [2024-07-15 07:46:52.721861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.721951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:14.314 [2024-07-15 07:46:52.721979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.174 ms 00:38:14.314 [2024-07-15 07:46:52.721992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.722183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.722207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:14.314 [2024-07-15 07:46:52.722229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:38:14.314 [2024-07-15 07:46:52.722241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.722337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.722358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:14.314 [2024-07-15 07:46:52.722372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:38:14.314 [2024-07-15 07:46:52.722384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.722424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.722443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:14.314 [2024-07-15 07:46:52.722487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:14.314 [2024-07-15 07:46:52.722508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.722567] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:14.314 [2024-07-15 07:46:52.722587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.722601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:14.314 [2024-07-15 07:46:52.722615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:38:14.314 [2024-07-15 07:46:52.722627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.756147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.756276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:14.314 [2024-07-15 07:46:52.756325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.477 ms 00:38:14.314 [2024-07-15 07:46:52.756339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.756641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:14.314 [2024-07-15 07:46:52.756666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:38:14.314 [2024-07-15 07:46:52.756683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:38:14.314 [2024-07-15 07:46:52.756696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:14.314 [2024-07-15 07:46:52.758291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:14.314 [2024-07-15 07:46:52.764509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 436.513 ms, result 0 00:38:14.314 [2024-07-15 07:46:52.765646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:14.314 [2024-07-15 07:46:52.783317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:24.520  Copying: 27/256 [MB] (27 MBps) Copying: 52/256 [MB] (24 MBps) Copying: 77/256 [MB] (24 MBps) Copying: 101/256 [MB] (24 MBps) Copying: 127/256 [MB] (25 MBps) Copying: 153/256 [MB] (25 MBps) Copying: 178/256 [MB] (25 MBps) Copying: 202/256 [MB] (24 MBps) Copying: 227/256 [MB] (24 MBps) Copying: 252/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-15 07:47:02.943678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:24.521 [2024-07-15 07:47:02.956546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:02.956603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:24.521 [2024-07-15 07:47:02.956624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:24.521 [2024-07-15 07:47:02.956637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:02.956669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:24.521 [2024-07-15 07:47:02.960646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:02.960685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:24.521 [2024-07-15 07:47:02.960700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:38:24.521 [2024-07-15 07:47:02.960712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:02.961004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:02.961023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:24.521 [2024-07-15 07:47:02.961043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:38:24.521 [2024-07-15 07:47:02.961054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:02.964704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:02.964732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:24.521 [2024-07-15 07:47:02.964763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:38:24.521 [2024-07-15 07:47:02.964782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:02.972005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:02.972056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:24.521 
[2024-07-15 07:47:02.972072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.197 ms 00:38:24.521 [2024-07-15 07:47:02.972084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.001781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.001837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:24.521 [2024-07-15 07:47:03.001855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.615 ms 00:38:24.521 [2024-07-15 07:47:03.001867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.020353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.020394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:24.521 [2024-07-15 07:47:03.020412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.403 ms 00:38:24.521 [2024-07-15 07:47:03.020424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.020630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.020653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:24.521 [2024-07-15 07:47:03.020668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:38:24.521 [2024-07-15 07:47:03.020680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.052711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.052785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:24.521 [2024-07-15 07:47:03.052803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.005 ms 00:38:24.521 [2024-07-15 07:47:03.052815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.083531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.083574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:24.521 [2024-07-15 07:47:03.083591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.604 ms 00:38:24.521 [2024-07-15 07:47:03.083603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.521 [2024-07-15 07:47:03.113901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.521 [2024-07-15 07:47:03.113972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:24.521 [2024-07-15 07:47:03.113988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.228 ms 00:38:24.521 [2024-07-15 07:47:03.113999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.781 [2024-07-15 07:47:03.144190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.781 [2024-07-15 07:47:03.144244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:24.781 [2024-07-15 07:47:03.144276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.088 ms 00:38:24.781 [2024-07-15 07:47:03.144288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.781 [2024-07-15 07:47:03.144356] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:24.781 [2024-07-15 07:47:03.144384] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 
07:47:03.144747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.144998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:38:24.781 [2024-07-15 07:47:03.145107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:24.781 [2024-07-15 07:47:03.145398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:24.782 [2024-07-15 07:47:03.145786] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:24.782 [2024-07-15 07:47:03.145798] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:24.782 [2024-07-15 07:47:03.145811] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:24.782 [2024-07-15 07:47:03.145823] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:24.782 [2024-07-15 07:47:03.145850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:24.782 [2024-07-15 07:47:03.145877] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:24.782 [2024-07-15 07:47:03.145903] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:24.782 [2024-07-15 07:47:03.145915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:24.782 [2024-07-15 07:47:03.145927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:24.782 [2024-07-15 07:47:03.145937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:24.782 [2024-07-15 07:47:03.145947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:24.782 [2024-07-15 07:47:03.145958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.782 [2024-07-15 07:47:03.145971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:24.782 [2024-07-15 07:47:03.145983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:38:24.782 [2024-07-15 07:47:03.145999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.163833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.782 [2024-07-15 07:47:03.163883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:24.782 [2024-07-15 07:47:03.163901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.805 ms 00:38:24.782 [2024-07-15 07:47:03.163914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.164483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:24.782 [2024-07-15 07:47:03.164514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:24.782 [2024-07-15 07:47:03.164537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:38:24.782 [2024-07-15 07:47:03.164562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.208875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:24.782 [2024-07-15 07:47:03.208959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:24.782 [2024-07-15 07:47:03.208978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:24.782 [2024-07-15 07:47:03.208990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.209135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:24.782 [2024-07-15 07:47:03.209154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:24.782 [2024-07-15 07:47:03.209176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:24.782 [2024-07-15 07:47:03.209188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.209262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:24.782 [2024-07-15 07:47:03.209282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:24.782 [2024-07-15 07:47:03.209296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:24.782 [2024-07-15 07:47:03.209308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.782 [2024-07-15 07:47:03.209358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:24.782 [2024-07-15 07:47:03.209381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:24.783 [2024-07-15 07:47:03.209398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:24.783 [2024-07-15 07:47:03.209417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:24.783 [2024-07-15 07:47:03.322472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:24.783 [2024-07-15 07:47:03.322562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:24.783 [2024-07-15 07:47:03.322583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:24.783 [2024-07-15 07:47:03.322595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.416930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:25.041 [2024-07-15 07:47:03.417040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:25.041 [2024-07-15 07:47:03.417231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:25.041 [2024-07-15 07:47:03.417309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:25.041 [2024-07-15 07:47:03.417529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:25.041 [2024-07-15 07:47:03.417627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 
[2024-07-15 07:47:03.417638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:25.041 [2024-07-15 07:47:03.417730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.417805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:25.041 [2024-07-15 07:47:03.417823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:25.041 [2024-07-15 07:47:03.417835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:25.041 [2024-07-15 07:47:03.417846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.041 [2024-07-15 07:47:03.418078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.528 ms, result 0 00:38:26.414 00:38:26.414 00:38:26.414 07:47:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:38:26.414 07:47:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:38:26.672 07:47:05 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:26.931 [2024-07-15 07:47:05.385711] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:38:26.931 [2024-07-15 07:47:05.385915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81575 ] 00:38:27.189 [2024-07-15 07:47:05.565834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.448 [2024-07-15 07:47:05.838135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.706 [2024-07-15 07:47:06.226261] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:27.706 [2024-07-15 07:47:06.226374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:27.968 [2024-07-15 07:47:06.395269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.395358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:27.968 [2024-07-15 07:47:06.395381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:27.968 [2024-07-15 07:47:06.395394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.400004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.400064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:27.968 [2024-07-15 07:47:06.400082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.579 ms 00:38:27.968 [2024-07-15 07:47:06.400094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.400250] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:27.968 [2024-07-15 07:47:06.401255] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:27.968 [2024-07-15 07:47:06.401305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.401321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:27.968 [2024-07-15 07:47:06.401334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:38:27.968 [2024-07-15 07:47:06.401347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.404087] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:27.968 [2024-07-15 07:47:06.422316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.422390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:27.968 [2024-07-15 07:47:06.422419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.228 ms 00:38:27.968 [2024-07-15 07:47:06.422433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.422647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.422671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:27.968 [2024-07-15 07:47:06.422686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:27.968 [2024-07-15 07:47:06.422704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.435621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:27.968 [2024-07-15 07:47:06.435687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:27.968 [2024-07-15 07:47:06.435709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:38:27.968 [2024-07-15 07:47:06.435722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.435953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.435978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:27.968 [2024-07-15 07:47:06.435993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:38:27.968 [2024-07-15 07:47:06.436006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.436057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.436075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:27.968 [2024-07-15 07:47:06.436089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:38:27.968 [2024-07-15 07:47:06.436106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.436149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:27.968 [2024-07-15 07:47:06.441921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.441960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:27.968 [2024-07-15 07:47:06.441976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:38:27.968 [2024-07-15 07:47:06.441989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.442066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.442085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:27.968 [2024-07-15 07:47:06.442100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:27.968 [2024-07-15 07:47:06.442112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.442149] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:27.968 [2024-07-15 07:47:06.442184] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:27.968 [2024-07-15 07:47:06.442238] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:27.968 [2024-07-15 07:47:06.442261] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:27.968 [2024-07-15 07:47:06.442369] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:27.968 [2024-07-15 07:47:06.442397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:27.968 [2024-07-15 07:47:06.442413] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:27.968 [2024-07-15 07:47:06.442430] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:27.968 [2024-07-15 07:47:06.442445] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:27.968 [2024-07-15 07:47:06.442476] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:27.968 [2024-07-15 07:47:06.442494] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:27.968 [2024-07-15 07:47:06.442506] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:27.968 [2024-07-15 07:47:06.442518] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:27.968 [2024-07-15 07:47:06.442531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.442544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:27.968 [2024-07-15 07:47:06.442557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:38:27.968 [2024-07-15 07:47:06.442568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.442678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.968 [2024-07-15 07:47:06.442694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:27.968 [2024-07-15 07:47:06.442707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:38:27.968 [2024-07-15 07:47:06.442725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.968 [2024-07-15 07:47:06.442840] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:27.968 [2024-07-15 07:47:06.442858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:27.968 [2024-07-15 07:47:06.442872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:27.968 [2024-07-15 07:47:06.442885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.968 [2024-07-15 07:47:06.442897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:27.968 [2024-07-15 07:47:06.442908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:27.968 [2024-07-15 07:47:06.442919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:27.968 [2024-07-15 07:47:06.442931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:27.968 [2024-07-15 07:47:06.442943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:27.968 [2024-07-15 07:47:06.442955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:27.968 [2024-07-15 07:47:06.442966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:27.968 [2024-07-15 07:47:06.442998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:27.968 [2024-07-15 07:47:06.443010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:27.968 [2024-07-15 07:47:06.443021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:27.969 [2024-07-15 07:47:06.443033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:27.969 [2024-07-15 07:47:06.443045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:27.969 [2024-07-15 07:47:06.443067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443095] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:27.969 [2024-07-15 07:47:06.443120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:27.969 [2024-07-15 07:47:06.443153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:27.969 [2024-07-15 07:47:06.443186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:27.969 [2024-07-15 07:47:06.443219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:27.969 [2024-07-15 07:47:06.443251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:27.969 [2024-07-15 07:47:06.443275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:27.969 [2024-07-15 07:47:06.443286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:27.969 [2024-07-15 07:47:06.443297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:27.969 [2024-07-15 07:47:06.443308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:27.969 [2024-07-15 07:47:06.443319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:27.969 [2024-07-15 07:47:06.443331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:27.969 [2024-07-15 07:47:06.443354] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:27.969 [2024-07-15 07:47:06.443365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443375] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:27.969 [2024-07-15 07:47:06.443387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:27.969 [2024-07-15 07:47:06.443399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:27.969 [2024-07-15 07:47:06.443424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:27.969 [2024-07-15 07:47:06.443435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:27.969 [2024-07-15 07:47:06.443447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:27.969 
[2024-07-15 07:47:06.443482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:27.969 [2024-07-15 07:47:06.443496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:27.969 [2024-07-15 07:47:06.443508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:27.969 [2024-07-15 07:47:06.443521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:27.969 [2024-07-15 07:47:06.443543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:27.969 [2024-07-15 07:47:06.443572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:27.969 [2024-07-15 07:47:06.443584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:27.969 [2024-07-15 07:47:06.443597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:27.969 [2024-07-15 07:47:06.443608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:27.969 [2024-07-15 07:47:06.443621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:27.969 [2024-07-15 07:47:06.443633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:27.969 [2024-07-15 07:47:06.443645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:27.969 [2024-07-15 07:47:06.443657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:27.969 [2024-07-15 07:47:06.443669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:27.969 [2024-07-15 07:47:06.443730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:27.969 [2024-07-15 07:47:06.443744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:27.969 [2024-07-15 07:47:06.443779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:27.969 [2024-07-15 07:47:06.443791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:27.969 [2024-07-15 07:47:06.443803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:27.969 [2024-07-15 07:47:06.443815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.443839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:27.969 [2024-07-15 07:47:06.443852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:38:27.969 [2024-07-15 07:47:06.443864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.498609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.498688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:27.969 [2024-07-15 07:47:06.498713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.654 ms 00:38:27.969 [2024-07-15 07:47:06.498728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.499019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.499052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:27.969 [2024-07-15 07:47:06.499069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:38:27.969 [2024-07-15 07:47:06.499090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.548037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.548123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:27.969 [2024-07-15 07:47:06.548145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.907 ms 00:38:27.969 [2024-07-15 07:47:06.548158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.548322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.548343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:27.969 [2024-07-15 07:47:06.548373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:27.969 [2024-07-15 07:47:06.548386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.549176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.549221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:27.969 [2024-07-15 07:47:06.549238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:38:27.969 [2024-07-15 07:47:06.549251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.549457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.549498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:27.969 [2024-07-15 07:47:06.549512] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:38:27.969 [2024-07-15 07:47:06.549525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:27.969 [2024-07-15 07:47:06.571005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:27.969 [2024-07-15 07:47:06.571083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:27.969 [2024-07-15 07:47:06.571116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.443 ms 00:38:27.969 [2024-07-15 07:47:06.571130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.235 [2024-07-15 07:47:06.588897] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:38:28.235 [2024-07-15 07:47:06.588978] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:28.235 [2024-07-15 07:47:06.589001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.235 [2024-07-15 07:47:06.589016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:28.235 [2024-07-15 07:47:06.589032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.664 ms 00:38:28.235 [2024-07-15 07:47:06.589045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.235 [2024-07-15 07:47:06.619006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.235 [2024-07-15 07:47:06.619084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:28.235 [2024-07-15 07:47:06.619107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.829 ms 00:38:28.235 [2024-07-15 07:47:06.619121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.235 [2024-07-15 07:47:06.636167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.235 [2024-07-15 07:47:06.636269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:28.235 [2024-07-15 07:47:06.636292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.875 ms 00:38:28.235 [2024-07-15 07:47:06.636307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.235 [2024-07-15 07:47:06.654889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.654984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:28.236 [2024-07-15 07:47:06.655007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.350 ms 00:38:28.236 [2024-07-15 07:47:06.655020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.656168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.656228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:28.236 [2024-07-15 07:47:06.656246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:38:28.236 [2024-07-15 07:47:06.656259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.741310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.741436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:28.236 [2024-07-15 07:47:06.741474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.998 ms 00:38:28.236 [2024-07-15 07:47:06.741490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.754686] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:28.236 [2024-07-15 07:47:06.782176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.782319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:28.236 [2024-07-15 07:47:06.782342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.490 ms 00:38:28.236 [2024-07-15 07:47:06.782356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.782584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.782606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:28.236 [2024-07-15 07:47:06.782627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:28.236 [2024-07-15 07:47:06.782639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.782745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.782769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:28.236 [2024-07-15 07:47:06.782784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:38:28.236 [2024-07-15 07:47:06.782796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.782834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.782850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:28.236 [2024-07-15 07:47:06.782865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:28.236 [2024-07-15 07:47:06.782884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.782933] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:28.236 [2024-07-15 07:47:06.782955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.782969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:28.236 [2024-07-15 07:47:06.782997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:38:28.236 [2024-07-15 07:47:06.783010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.815135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.815195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:28.236 [2024-07-15 07:47:06.815224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.089 ms 00:38:28.236 [2024-07-15 07:47:06.815237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.236 [2024-07-15 07:47:06.815396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.236 [2024-07-15 07:47:06.815418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:28.236 [2024-07-15 07:47:06.815432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:38:28.236 [2024-07-15 07:47:06.815445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:38:28.236 [2024-07-15 07:47:06.816911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:28.236 [2024-07-15 07:47:06.820998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.210 ms, result 0 00:38:28.236 [2024-07-15 07:47:06.822035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:28.236 [2024-07-15 07:47:06.838505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:28.500  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-15 07:47:07.004568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:28.500 [2024-07-15 07:47:07.017904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.017986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:28.500 [2024-07-15 07:47:07.018011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:28.500 [2024-07-15 07:47:07.018026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.018065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:28.500 [2024-07-15 07:47:07.022163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.022221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:28.500 [2024-07-15 07:47:07.022250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:38:28.500 [2024-07-15 07:47:07.022264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.024196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.024241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:28.500 [2024-07-15 07:47:07.024261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.890 ms 00:38:28.500 [2024-07-15 07:47:07.024274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.028288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.028334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:28.500 [2024-07-15 07:47:07.028352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:38:28.500 [2024-07-15 07:47:07.028377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.035797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.035872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:28.500 [2024-07-15 07:47:07.035892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.369 ms 00:38:28.500 [2024-07-15 07:47:07.035906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.068422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.068513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:28.500 [2024-07-15 07:47:07.068535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.420 ms 00:38:28.500 [2024-07-15 07:47:07.068549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.087058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.087127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:28.500 [2024-07-15 07:47:07.087162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.405 ms 00:38:28.500 [2024-07-15 07:47:07.087175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.500 [2024-07-15 07:47:07.087433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.500 [2024-07-15 07:47:07.087468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:28.500 [2024-07-15 07:47:07.087485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:38:28.500 [2024-07-15 07:47:07.087498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.759 [2024-07-15 07:47:07.118566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.759 [2024-07-15 07:47:07.118659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:28.759 [2024-07-15 07:47:07.118681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.038 ms 00:38:28.759 [2024-07-15 07:47:07.118694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.759 [2024-07-15 07:47:07.149477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.759 [2024-07-15 07:47:07.149553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:28.759 [2024-07-15 07:47:07.149575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.678 ms 00:38:28.759 [2024-07-15 07:47:07.149589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.759 [2024-07-15 07:47:07.180111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.759 [2024-07-15 07:47:07.180187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:28.759 [2024-07-15 07:47:07.180208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.428 ms 00:38:28.759 [2024-07-15 07:47:07.180222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.759 [2024-07-15 07:47:07.210799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.759 [2024-07-15 07:47:07.210864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:28.759 [2024-07-15 07:47:07.210885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.430 ms 00:38:28.759 [2024-07-15 07:47:07.210898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.759 [2024-07-15 07:47:07.210981] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:28.759 [2024-07-15 07:47:07.211011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 
07:47:07.211079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:28.759 [2024-07-15 07:47:07.211270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:38:28.760 [2024-07-15 07:47:07.211410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.211996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:28.760 [2024-07-15 07:47:07.212377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:28.760 [2024-07-15 07:47:07.212389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:28.760 [2024-07-15 07:47:07.212402] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:28.760 [2024-07-15 07:47:07.212414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:28.760 
[2024-07-15 07:47:07.212445] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:28.760 [2024-07-15 07:47:07.212471] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:28.760 [2024-07-15 07:47:07.212484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:28.760 [2024-07-15 07:47:07.212496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:28.760 [2024-07-15 07:47:07.212509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:28.760 [2024-07-15 07:47:07.212520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:28.760 [2024-07-15 07:47:07.212531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:28.760 [2024-07-15 07:47:07.212543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.760 [2024-07-15 07:47:07.212556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:28.760 [2024-07-15 07:47:07.212569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:38:28.760 [2024-07-15 07:47:07.212587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.760 [2024-07-15 07:47:07.230106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.760 [2024-07-15 07:47:07.230167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:28.760 [2024-07-15 07:47:07.230187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.487 ms 00:38:28.760 [2024-07-15 07:47:07.230201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.761 [2024-07-15 07:47:07.230779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:28.761 [2024-07-15 07:47:07.230813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:28.761 [2024-07-15 07:47:07.230837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:38:28.761 [2024-07-15 07:47:07.230850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.761 [2024-07-15 07:47:07.273987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:28.761 [2024-07-15 07:47:07.274080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:28.761 [2024-07-15 07:47:07.274102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:28.761 [2024-07-15 07:47:07.274116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.761 [2024-07-15 07:47:07.274277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:28.761 [2024-07-15 07:47:07.274296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:28.761 [2024-07-15 07:47:07.274321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:28.761 [2024-07-15 07:47:07.274333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.761 [2024-07-15 07:47:07.274415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:28.761 [2024-07-15 07:47:07.274446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:28.761 [2024-07-15 07:47:07.274484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:28.761 [2024-07-15 07:47:07.274497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:28.761 [2024-07-15 07:47:07.274526] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:38:28.761 [2024-07-15 07:47:07.274543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:28.761 [2024-07-15 07:47:07.274556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:28.761 [2024-07-15 07:47:07.274576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.390750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.390854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:29.020 [2024-07-15 07:47:07.390879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.390893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.483721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.483824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:29.020 [2024-07-15 07:47:07.483862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.483877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.483985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:29.020 [2024-07-15 07:47:07.484019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.484075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:29.020 [2024-07-15 07:47:07.484105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.484263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:29.020 [2024-07-15 07:47:07.484298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.484369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:29.020 [2024-07-15 07:47:07.484402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.484501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:29.020 [2024-07-15 07:47:07.484536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:38:29.020 [2024-07-15 07:47:07.484612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:29.020 [2024-07-15 07:47:07.484630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:29.020 [2024-07-15 07:47:07.484644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:29.020 [2024-07-15 07:47:07.484656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.020 [2024-07-15 07:47:07.484866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.975 ms, result 0 00:38:30.392 00:38:30.392 00:38:30.392 07:47:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81613 00:38:30.392 07:47:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81613 00:38:30.392 07:47:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:38:30.392 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81613 ']' 00:38:30.393 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.393 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:30.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.393 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.393 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:30.393 07:47:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:38:30.393 [2024-07-15 07:47:08.906835] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:38:30.393 [2024-07-15 07:47:08.907050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81613 ] 00:38:30.650 [2024-07-15 07:47:09.088906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.908 [2024-07-15 07:47:09.361400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.839 07:47:10 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:31.839 07:47:10 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:38:31.839 07:47:10 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:38:32.097 [2024-07-15 07:47:10.613851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:32.097 [2024-07-15 07:47:10.613952] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:32.356 [2024-07-15 07:47:10.797097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.797192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:32.356 [2024-07-15 07:47:10.797215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:32.356 [2024-07-15 07:47:10.797231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.800938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.801008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:32.356 [2024-07-15 07:47:10.801027] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:38:32.356 [2024-07-15 07:47:10.801042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.801360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:32.356 [2024-07-15 07:47:10.802424] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:32.356 [2024-07-15 07:47:10.802476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.802496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:32.356 [2024-07-15 07:47:10.802511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:38:32.356 [2024-07-15 07:47:10.802526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.805203] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:32.356 [2024-07-15 07:47:10.824157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.824258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:32.356 [2024-07-15 07:47:10.824286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.936 ms 00:38:32.356 [2024-07-15 07:47:10.824299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.824577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.824609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:32.356 [2024-07-15 07:47:10.824629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:38:32.356 [2024-07-15 07:47:10.824641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.837818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.837905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:32.356 [2024-07-15 07:47:10.837941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.076 ms 00:38:32.356 [2024-07-15 07:47:10.837955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.838222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.838253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:32.356 [2024-07-15 07:47:10.838271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:38:32.356 [2024-07-15 07:47:10.838284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.838346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.838362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:32.356 [2024-07-15 07:47:10.838379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:32.356 [2024-07-15 07:47:10.838391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.838436] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:32.356 [2024-07-15 07:47:10.844273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:32.356 [2024-07-15 07:47:10.844321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:32.356 [2024-07-15 07:47:10.844346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.852 ms 00:38:32.356 [2024-07-15 07:47:10.844361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.844478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.844507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:32.356 [2024-07-15 07:47:10.844521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:38:32.356 [2024-07-15 07:47:10.844541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.844578] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:32.356 [2024-07-15 07:47:10.844613] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:32.356 [2024-07-15 07:47:10.844668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:32.356 [2024-07-15 07:47:10.844697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:32.356 [2024-07-15 07:47:10.844806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:32.356 [2024-07-15 07:47:10.844839] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:32.356 [2024-07-15 07:47:10.844860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:32.356 [2024-07-15 07:47:10.844879] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:32.356 [2024-07-15 07:47:10.844894] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:32.356 [2024-07-15 07:47:10.844910] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:32.356 [2024-07-15 07:47:10.844921] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:32.356 [2024-07-15 07:47:10.844935] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:32.356 [2024-07-15 07:47:10.844947] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:32.356 [2024-07-15 07:47:10.844966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.844978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:32.356 [2024-07-15 07:47:10.844994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:38:32.356 [2024-07-15 07:47:10.845006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.845110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.356 [2024-07-15 07:47:10.845133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:32.356 [2024-07-15 07:47:10.845150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:38:32.356 [2024-07-15 07:47:10.845161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.356 [2024-07-15 07:47:10.845291] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:32.356 [2024-07-15 07:47:10.845324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:32.356 [2024-07-15 07:47:10.845341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:32.356 [2024-07-15 07:47:10.845353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:32.356 [2024-07-15 07:47:10.845382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:32.356 [2024-07-15 07:47:10.845410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:32.356 [2024-07-15 07:47:10.845428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:32.356 [2024-07-15 07:47:10.845466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:32.356 [2024-07-15 07:47:10.845481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:32.356 [2024-07-15 07:47:10.845494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:32.356 [2024-07-15 07:47:10.845505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:32.356 [2024-07-15 07:47:10.845519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:32.356 [2024-07-15 07:47:10.845529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:32.356 [2024-07-15 07:47:10.845554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:32.356 [2024-07-15 07:47:10.845567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:32.356 [2024-07-15 07:47:10.845591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:32.356 [2024-07-15 07:47:10.845615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:32.356 [2024-07-15 07:47:10.845625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:32.356 [2024-07-15 07:47:10.845641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:32.356 [2024-07-15 07:47:10.845652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:32.357 [2024-07-15 07:47:10.845665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:32.357 [2024-07-15 07:47:10.845702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:32.357 [2024-07-15 07:47:10.845713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:32.357 [2024-07-15 07:47:10.845739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:32.357 [2024-07-15 
07:47:10.845752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:32.357 [2024-07-15 07:47:10.845785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:32.357 [2024-07-15 07:47:10.845795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:32.357 [2024-07-15 07:47:10.845810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:32.357 [2024-07-15 07:47:10.845821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:32.357 [2024-07-15 07:47:10.845835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:32.357 [2024-07-15 07:47:10.845846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:32.357 [2024-07-15 07:47:10.845873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:32.357 [2024-07-15 07:47:10.845886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845897] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:32.357 [2024-07-15 07:47:10.845915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:32.357 [2024-07-15 07:47:10.845926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:32.357 [2024-07-15 07:47:10.845941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:32.357 [2024-07-15 07:47:10.845952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:32.357 [2024-07-15 07:47:10.845966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:32.357 [2024-07-15 07:47:10.845977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:32.357 [2024-07-15 07:47:10.845990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:32.357 [2024-07-15 07:47:10.846000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:32.357 [2024-07-15 07:47:10.846014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:32.357 [2024-07-15 07:47:10.846027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:32.357 [2024-07-15 07:47:10.846044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:32.357 [2024-07-15 07:47:10.846077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:32.357 [2024-07-15 07:47:10.846088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:32.357 [2024-07-15 07:47:10.846102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:32.357 [2024-07-15 07:47:10.846114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:32.357 
[2024-07-15 07:47:10.846128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:32.357 [2024-07-15 07:47:10.846139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:32.357 [2024-07-15 07:47:10.846153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:32.357 [2024-07-15 07:47:10.846165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:32.357 [2024-07-15 07:47:10.846178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:32.357 [2024-07-15 07:47:10.846244] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:32.357 [2024-07-15 07:47:10.846260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:32.357 [2024-07-15 07:47:10.846292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:32.357 [2024-07-15 07:47:10.846303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:32.357 [2024-07-15 07:47:10.846318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:32.357 [2024-07-15 07:47:10.846331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.846347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:32.357 [2024-07-15 07:47:10.846359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:38:32.357 [2024-07-15 07:47:10.846373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.892603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.892693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:32.357 [2024-07-15 07:47:10.892717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.116 ms 00:38:32.357 [2024-07-15 07:47:10.892738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.892973] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.893005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:32.357 [2024-07-15 07:47:10.893021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:38:32.357 [2024-07-15 07:47:10.893036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.941248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.941341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:32.357 [2024-07-15 07:47:10.941362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.174 ms 00:38:32.357 [2024-07-15 07:47:10.941378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.941558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.941584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:32.357 [2024-07-15 07:47:10.941599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:32.357 [2024-07-15 07:47:10.941619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.942362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.942394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:32.357 [2024-07-15 07:47:10.942414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:38:32.357 [2024-07-15 07:47:10.942430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.942635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.942668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:32.357 [2024-07-15 07:47:10.942682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:38:32.357 [2024-07-15 07:47:10.942696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.357 [2024-07-15 07:47:10.966508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.357 [2024-07-15 07:47:10.966596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:32.357 [2024-07-15 07:47:10.966619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.775 ms 00:38:32.357 [2024-07-15 07:47:10.966635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.615 [2024-07-15 07:47:10.984215] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:32.615 [2024-07-15 07:47:10.984294] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:32.615 [2024-07-15 07:47:10.984317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.615 [2024-07-15 07:47:10.984335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:32.615 [2024-07-15 07:47:10.984354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:38:32.615 [2024-07-15 07:47:10.984369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.615 [2024-07-15 07:47:11.014712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.615 [2024-07-15 
07:47:11.014853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:32.615 [2024-07-15 07:47:11.014877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.162 ms 00:38:32.615 [2024-07-15 07:47:11.014893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.615 [2024-07-15 07:47:11.032819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.615 [2024-07-15 07:47:11.032925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:32.615 [2024-07-15 07:47:11.032963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.701 ms 00:38:32.615 [2024-07-15 07:47:11.032984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.615 [2024-07-15 07:47:11.049965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.615 [2024-07-15 07:47:11.050068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:32.615 [2024-07-15 07:47:11.050089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.794 ms 00:38:32.616 [2024-07-15 07:47:11.050105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.051239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.051280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:32.616 [2024-07-15 07:47:11.051296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:38:32.616 [2024-07-15 07:47:11.051311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.155165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.155298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:32.616 [2024-07-15 07:47:11.155334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.813 ms 00:38:32.616 [2024-07-15 07:47:11.155350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.174444] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:32.616 [2024-07-15 07:47:11.203213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.203308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:32.616 [2024-07-15 07:47:11.203338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.598 ms 00:38:32.616 [2024-07-15 07:47:11.203357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.203565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.203588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:32.616 [2024-07-15 07:47:11.203606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:32.616 [2024-07-15 07:47:11.203618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.203725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.203742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:32.616 [2024-07-15 07:47:11.203758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:38:32.616 
[2024-07-15 07:47:11.203770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.203815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.203830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:32.616 [2024-07-15 07:47:11.203851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:32.616 [2024-07-15 07:47:11.203863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.616 [2024-07-15 07:47:11.203913] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:32.616 [2024-07-15 07:47:11.203929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.616 [2024-07-15 07:47:11.203948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:32.616 [2024-07-15 07:47:11.203961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:38:32.616 [2024-07-15 07:47:11.203975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.874 [2024-07-15 07:47:11.238006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.874 [2024-07-15 07:47:11.238096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:32.874 [2024-07-15 07:47:11.238119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.993 ms 00:38:32.874 [2024-07-15 07:47:11.238136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.874 [2024-07-15 07:47:11.238331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:32.874 [2024-07-15 07:47:11.238358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:32.874 [2024-07-15 07:47:11.238373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:32.874 [2024-07-15 07:47:11.238388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:32.874 [2024-07-15 07:47:11.239927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:32.874 [2024-07-15 07:47:11.244331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 442.374 ms, result 0 00:38:32.874 [2024-07-15 07:47:11.245353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:32.874 Some configs were skipped because the RPC state that can call them passed over. 
00:38:32.874 07:47:11 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:38:33.131 [2024-07-15 07:47:11.559215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.131 [2024-07-15 07:47:11.559280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:38:33.131 [2024-07-15 07:47:11.559309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:38:33.131 [2024-07-15 07:47:11.559324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.131 [2024-07-15 07:47:11.559377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.730 ms, result 0 00:38:33.131 true 00:38:33.131 07:47:11 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:38:33.389 [2024-07-15 07:47:11.859159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.389 [2024-07-15 07:47:11.859241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:38:33.389 [2024-07-15 07:47:11.859263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:38:33.390 [2024-07-15 07:47:11.859279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.390 [2024-07-15 07:47:11.859336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.357 ms, result 0 00:38:33.390 true 00:38:33.390 07:47:11 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81613 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81613 ']' 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81613 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81613 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:33.390 killing process with pid 81613 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81613' 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81613 00:38:33.390 07:47:11 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81613 00:38:34.766 [2024-07-15 07:47:13.038023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.038100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:34.766 [2024-07-15 07:47:13.038125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:34.766 [2024-07-15 07:47:13.038139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.038178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:34.766 [2024-07-15 07:47:13.042271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.042326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:34.766 [2024-07-15 07:47:13.042345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.066 ms 00:38:34.766 [2024-07-15 07:47:13.042365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.042740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.042776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:34.766 [2024-07-15 07:47:13.042792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:38:34.766 [2024-07-15 07:47:13.042807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.046861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.046927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:34.766 [2024-07-15 07:47:13.046948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.028 ms 00:38:34.766 [2024-07-15 07:47:13.046963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.054330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.054394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:34.766 [2024-07-15 07:47:13.054412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:38:34.766 [2024-07-15 07:47:13.054432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.067993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.068084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:34.766 [2024-07-15 07:47:13.068106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.451 ms 00:38:34.766 [2024-07-15 07:47:13.068126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.077473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.077566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:34.766 [2024-07-15 07:47:13.077592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.240 ms 00:38:34.766 [2024-07-15 07:47:13.077608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.077820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.077847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:34.766 [2024-07-15 07:47:13.077862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:38:34.766 [2024-07-15 07:47:13.077895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.091683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.091796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:34.766 [2024-07-15 07:47:13.091817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.752 ms 00:38:34.766 [2024-07-15 07:47:13.091833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.104795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.104877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:34.766 [2024-07-15 
07:47:13.104897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:38:34.766 [2024-07-15 07:47:13.104921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.117512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.117604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:34.766 [2024-07-15 07:47:13.117626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.497 ms 00:38:34.766 [2024-07-15 07:47:13.117641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.130545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.766 [2024-07-15 07:47:13.130623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:34.766 [2024-07-15 07:47:13.130645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.756 ms 00:38:34.766 [2024-07-15 07:47:13.130661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.766 [2024-07-15 07:47:13.130758] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:34.766 [2024-07-15 07:47:13.130792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.130987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131047] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:34.766 [2024-07-15 07:47:13.131256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 
07:47:13.131401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:38:34.767 [2024-07-15 07:47:13.131771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.131993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:34.767 [2024-07-15 07:47:13.132257] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:34.767 [2024-07-15 07:47:13.132269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:34.767 [2024-07-15 07:47:13.132292] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:34.767 [2024-07-15 07:47:13.132303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:34.767 [2024-07-15 07:47:13.132317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:34.767 [2024-07-15 07:47:13.132330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:34.767 [2024-07-15 07:47:13.132343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:34.767 [2024-07-15 07:47:13.132355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:34.767 [2024-07-15 07:47:13.132370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:34.767 [2024-07-15 07:47:13.132380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:34.767 [2024-07-15 07:47:13.132411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:34.767 [2024-07-15 07:47:13.132423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.767 [2024-07-15 07:47:13.132439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:34.768 [2024-07-15 07:47:13.132465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.668 ms 00:38:34.768 [2024-07-15 07:47:13.132482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.151574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.768 [2024-07-15 07:47:13.151683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:34.768 [2024-07-15 07:47:13.151705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.022 ms 00:38:34.768 [2024-07-15 07:47:13.151726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.152346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:38:34.768 [2024-07-15 07:47:13.152379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:34.768 [2024-07-15 07:47:13.152399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:38:34.768 [2024-07-15 07:47:13.152418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.211385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.768 [2024-07-15 07:47:13.211488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:34.768 [2024-07-15 07:47:13.211511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.768 [2024-07-15 07:47:13.211527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.211707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.768 [2024-07-15 07:47:13.211731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:34.768 [2024-07-15 07:47:13.211745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.768 [2024-07-15 07:47:13.211764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.211839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.768 [2024-07-15 07:47:13.211863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:34.768 [2024-07-15 07:47:13.211877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.768 [2024-07-15 07:47:13.211894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.211922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.768 [2024-07-15 07:47:13.211939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:34.768 [2024-07-15 07:47:13.211952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.768 [2024-07-15 07:47:13.211966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.768 [2024-07-15 07:47:13.328482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.768 [2024-07-15 07:47:13.328593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:34.768 [2024-07-15 07:47:13.328617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.768 [2024-07-15 07:47:13.328633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.428671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.428783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:35.026 [2024-07-15 07:47:13.428805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.428821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.428962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.428986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:35.026 [2024-07-15 07:47:13.428999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
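The statistics dump above reports total writes: 960, user writes: 0 and WAF: inf. Write amplification factor is conventionally the ratio of device-level writes to user writes, so with zero user writes the ratio is undefined and is printed as inf; the 960 device writes at this point are presumably internal metadata traffic, since no user data has been written yet. A minimal sketch of that conventional ratio (the textbook definition, not lifted from the SPDK source):

total_writes = 960                     # "total writes" from the stats dump above
user_writes = 0                        # "user writes" from the stats dump above
waf = float('inf') if user_writes == 0 else total_writes / user_writes
print(waf)                             # inf, matching the "WAF: inf" line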
00:38:35.026 [2024-07-15 07:47:13.429062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.429079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:35.026 [2024-07-15 07:47:13.429092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.429250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.429274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:35.026 [2024-07-15 07:47:13.429288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.429366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.429389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:35.026 [2024-07-15 07:47:13.429402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.429496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.429521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:35.026 [2024-07-15 07:47:13.429534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.429616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:35.026 [2024-07-15 07:47:13.429637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:35.026 [2024-07-15 07:47:13.429650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:35.026 [2024-07-15 07:47:13.429664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.026 [2024-07-15 07:47:13.429872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.826 ms, result 0 00:38:35.960 07:47:14 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:36.219 [2024-07-15 07:47:14.633670] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
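The spdk_dd invocation above copies --count=65536 blocks from the ftl0 bdev into test/ftl/data. The logical block size is not stated in this log, but assuming the usual 4 KiB blocks the transfer size works out to the 256 MB total reported by the copy progress further down. A quick sketch of that arithmetic (the 4 KiB block size is an assumption):

block_count = 65536                    # from the --count argument above
block_size = 4096                      # assumed 4 KiB logical blocks; not stated in the log
total_mib = block_count * block_size / (1024 * 1024)
print(total_mib)                       # 256.0, consistent with "Copying: 256/256 [MB]" below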
00:38:36.219 [2024-07-15 07:47:14.633849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81688 ] 00:38:36.219 [2024-07-15 07:47:14.803130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.786 [2024-07-15 07:47:15.095062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.046 [2024-07-15 07:47:15.485118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:37.046 [2024-07-15 07:47:15.485208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:37.046 [2024-07-15 07:47:15.653100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.046 [2024-07-15 07:47:15.653179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:37.046 [2024-07-15 07:47:15.653202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:37.046 [2024-07-15 07:47:15.653216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.046 [2024-07-15 07:47:15.657013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.046 [2024-07-15 07:47:15.657061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:37.046 [2024-07-15 07:47:15.657079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.764 ms 00:38:37.046 [2024-07-15 07:47:15.657092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.046 [2024-07-15 07:47:15.657291] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:37.046 [2024-07-15 07:47:15.658343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:37.046 [2024-07-15 07:47:15.658386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.046 [2024-07-15 07:47:15.658401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:37.046 [2024-07-15 07:47:15.658416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:38:37.046 [2024-07-15 07:47:15.658429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.661035] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:37.319 [2024-07-15 07:47:15.679351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.679441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:37.319 [2024-07-15 07:47:15.679484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.313 ms 00:38:37.319 [2024-07-15 07:47:15.679497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.679717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.679742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:37.319 [2024-07-15 07:47:15.679768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:37.319 [2024-07-15 07:47:15.679780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.692053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:37.319 [2024-07-15 07:47:15.692165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:37.319 [2024-07-15 07:47:15.692201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.198 ms 00:38:37.319 [2024-07-15 07:47:15.692214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.692420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.692442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:37.319 [2024-07-15 07:47:15.692487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:38:37.319 [2024-07-15 07:47:15.692519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.692575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.692593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:37.319 [2024-07-15 07:47:15.692607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:37.319 [2024-07-15 07:47:15.692623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.692666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:38:37.319 [2024-07-15 07:47:15.698688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.698728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:37.319 [2024-07-15 07:47:15.698745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.035 ms 00:38:37.319 [2024-07-15 07:47:15.698758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.698838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.698858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:37.319 [2024-07-15 07:47:15.698872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:37.319 [2024-07-15 07:47:15.698884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.698922] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:37.319 [2024-07-15 07:47:15.698959] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:37.319 [2024-07-15 07:47:15.699032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:37.319 [2024-07-15 07:47:15.699057] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:37.319 [2024-07-15 07:47:15.699173] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:37.319 [2024-07-15 07:47:15.699190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:37.319 [2024-07-15 07:47:15.699206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:37.319 [2024-07-15 07:47:15.699221] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:37.319 [2024-07-15 07:47:15.699236] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:37.319 [2024-07-15 07:47:15.699249] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:38:37.319 [2024-07-15 07:47:15.699267] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:37.319 [2024-07-15 07:47:15.699279] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:37.319 [2024-07-15 07:47:15.699291] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:37.319 [2024-07-15 07:47:15.699303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.699315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:37.319 [2024-07-15 07:47:15.699327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:38:37.319 [2024-07-15 07:47:15.699339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.699437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.319 [2024-07-15 07:47:15.699470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:37.319 [2024-07-15 07:47:15.699486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:38:37.319 [2024-07-15 07:47:15.699503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.319 [2024-07-15 07:47:15.699619] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:37.319 [2024-07-15 07:47:15.699653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:37.319 [2024-07-15 07:47:15.699667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:37.320 [2024-07-15 07:47:15.699704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:37.320 [2024-07-15 07:47:15.699739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:37.320 [2024-07-15 07:47:15.699761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:37.320 [2024-07-15 07:47:15.699771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:38:37.320 [2024-07-15 07:47:15.699782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:37.320 [2024-07-15 07:47:15.699793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:37.320 [2024-07-15 07:47:15.699804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:38:37.320 [2024-07-15 07:47:15.699815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:37.320 [2024-07-15 07:47:15.699837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699865] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:37.320 [2024-07-15 07:47:15.699887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:37.320 [2024-07-15 07:47:15.699919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:37.320 [2024-07-15 07:47:15.699952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:37.320 [2024-07-15 07:47:15.699973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:37.320 [2024-07-15 07:47:15.699983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:38:37.320 [2024-07-15 07:47:15.699994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:37.320 [2024-07-15 07:47:15.700005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:37.320 [2024-07-15 07:47:15.700015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:38:37.320 [2024-07-15 07:47:15.700026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:37.320 [2024-07-15 07:47:15.700037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:37.320 [2024-07-15 07:47:15.700047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:38:37.320 [2024-07-15 07:47:15.700058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:37.320 [2024-07-15 07:47:15.700069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:37.320 [2024-07-15 07:47:15.700080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:38:37.320 [2024-07-15 07:47:15.700092] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.700103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:37.320 [2024-07-15 07:47:15.700114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:38:37.320 [2024-07-15 07:47:15.700125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.700136] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:37.320 [2024-07-15 07:47:15.700148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:37.320 [2024-07-15 07:47:15.700160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:37.320 [2024-07-15 07:47:15.700172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:37.320 [2024-07-15 07:47:15.700184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:37.320 [2024-07-15 07:47:15.700195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:37.320 [2024-07-15 07:47:15.700206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:37.320 
[2024-07-15 07:47:15.700217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:37.320 [2024-07-15 07:47:15.700228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:37.320 [2024-07-15 07:47:15.700238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:37.320 [2024-07-15 07:47:15.700252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:37.320 [2024-07-15 07:47:15.700273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:38:37.320 [2024-07-15 07:47:15.700298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:38:37.320 [2024-07-15 07:47:15.700310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:38:37.320 [2024-07-15 07:47:15.700321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:38:37.320 [2024-07-15 07:47:15.700333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:38:37.320 [2024-07-15 07:47:15.700344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:38:37.320 [2024-07-15 07:47:15.700356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:38:37.320 [2024-07-15 07:47:15.700367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:38:37.320 [2024-07-15 07:47:15.700379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:38:37.320 [2024-07-15 07:47:15.700391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:38:37.320 [2024-07-15 07:47:15.700448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:37.320 [2024-07-15 07:47:15.700486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:37.320 [2024-07-15 07:47:15.700514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:37.320 [2024-07-15 07:47:15.700526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:37.320 [2024-07-15 07:47:15.700538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:37.320 [2024-07-15 07:47:15.700551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.700564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:37.320 [2024-07-15 07:47:15.700577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:38:37.320 [2024-07-15 07:47:15.700589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.761500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.761583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:37.320 [2024-07-15 07:47:15.761608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.818 ms 00:38:37.320 [2024-07-15 07:47:15.761622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.761891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.761925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:37.320 [2024-07-15 07:47:15.761942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:38:37.320 [2024-07-15 07:47:15.761962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.810082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.810165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:37.320 [2024-07-15 07:47:15.810188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.078 ms 00:38:37.320 [2024-07-15 07:47:15.810201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.810370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.810391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:37.320 [2024-07-15 07:47:15.810406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:38:37.320 [2024-07-15 07:47:15.810419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.811189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.811220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:37.320 [2024-07-15 07:47:15.811236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:38:37.320 [2024-07-15 07:47:15.811248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.811473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.811507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:37.320 [2024-07-15 07:47:15.811522] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:38:37.320 [2024-07-15 07:47:15.811535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.832582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.320 [2024-07-15 07:47:15.832665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:37.320 [2024-07-15 07:47:15.832688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.009 ms 00:38:37.320 [2024-07-15 07:47:15.832711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.320 [2024-07-15 07:47:15.850545] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:37.320 [2024-07-15 07:47:15.850630] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:37.321 [2024-07-15 07:47:15.850654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.321 [2024-07-15 07:47:15.850669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:37.321 [2024-07-15 07:47:15.850687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.706 ms 00:38:37.321 [2024-07-15 07:47:15.850701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.321 [2024-07-15 07:47:15.881970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.321 [2024-07-15 07:47:15.882104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:37.321 [2024-07-15 07:47:15.882130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.097 ms 00:38:37.321 [2024-07-15 07:47:15.882143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.321 [2024-07-15 07:47:15.900764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.321 [2024-07-15 07:47:15.900867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:37.321 [2024-07-15 07:47:15.900890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.388 ms 00:38:37.321 [2024-07-15 07:47:15.900904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.321 [2024-07-15 07:47:15.919919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.321 [2024-07-15 07:47:15.920032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:37.321 [2024-07-15 07:47:15.920056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.807 ms 00:38:37.321 [2024-07-15 07:47:15.920069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.321 [2024-07-15 07:47:15.921236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.321 [2024-07-15 07:47:15.921284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:37.321 [2024-07-15 07:47:15.921302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:38:37.321 [2024-07-15 07:47:15.921314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.012520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.012636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:37.579 [2024-07-15 07:47:16.012660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.164 ms 00:38:37.579 [2024-07-15 07:47:16.012674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.031109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:37.579 [2024-07-15 07:47:16.060337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.060438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:37.579 [2024-07-15 07:47:16.060474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.445 ms 00:38:37.579 [2024-07-15 07:47:16.060490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.060681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.060703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:37.579 [2024-07-15 07:47:16.060726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:37.579 [2024-07-15 07:47:16.060745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.060873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.060907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:37.579 [2024-07-15 07:47:16.060923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:38:37.579 [2024-07-15 07:47:16.060935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.060981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.061003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:37.579 [2024-07-15 07:47:16.061024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:37.579 [2024-07-15 07:47:16.061053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.061113] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:37.579 [2024-07-15 07:47:16.061139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.061153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:37.579 [2024-07-15 07:47:16.061170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:38:37.579 [2024-07-15 07:47:16.061189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.096167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.096269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:37.579 [2024-07-15 07:47:16.096308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.933 ms 00:38:37.579 [2024-07-15 07:47:16.096322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:37.579 [2024-07-15 07:47:16.096584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:37.579 [2024-07-15 07:47:16.096611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:37.579 [2024-07-15 07:47:16.096627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:38:37.579 [2024-07-15 07:47:16.096645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
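The layout dump above lists L2P entries: 23592960 with L2P address size: 4, which lines up with the 90.00 MiB l2p region: the mapping table is simply entries times entry size. A short check of that figure (plain arithmetic on values taken from the dump, no SPDK APIs involved):

l2p_entries = 23592960                 # "L2P entries" from the layout dump above
entry_size_bytes = 4                   # "L2P address size: 4"
region_mib = l2p_entries * entry_size_bytes / (1024 * 1024)
print(region_mib)                      # 90.0, matching "Region l2p ... blocks: 90.00 MiB"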
00:38:37.579 [2024-07-15 07:47:16.098341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:37.580 [2024-07-15 07:47:16.103520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.828 ms, result 0 00:38:37.580 [2024-07-15 07:47:16.104377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:37.580 [2024-07-15 07:47:16.121341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:49.093  Copying: 25/256 [MB] (25 MBps) Copying: 48/256 [MB] (23 MBps) Copying: 70/256 [MB] (21 MBps) Copying: 93/256 [MB] (22 MBps) Copying: 117/256 [MB] (23 MBps) Copying: 141/256 [MB] (24 MBps) Copying: 164/256 [MB] (23 MBps) Copying: 188/256 [MB] (23 MBps) Copying: 211/256 [MB] (22 MBps) Copying: 234/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-15 07:47:27.500899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:49.093 [2024-07-15 07:47:27.516257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.516362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:49.093 [2024-07-15 07:47:27.516403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:49.093 [2024-07-15 07:47:27.516423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.516495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:49.093 [2024-07-15 07:47:27.521048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.521121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:49.093 [2024-07-15 07:47:27.521152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.512 ms 00:38:49.093 [2024-07-15 07:47:27.521173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.521615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.521655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:49.093 [2024-07-15 07:47:27.521680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:38:49.093 [2024-07-15 07:47:27.521703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.525464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.525521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:49.093 [2024-07-15 07:47:27.525552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:38:49.093 [2024-07-15 07:47:27.525585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.534071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.534144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:49.093 [2024-07-15 07:47:27.534173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.434 ms 00:38:49.093 [2024-07-15 07:47:27.534194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 
07:47:27.568062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.568162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:49.093 [2024-07-15 07:47:27.568197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.695 ms 00:38:49.093 [2024-07-15 07:47:27.568216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.587309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.587406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:49.093 [2024-07-15 07:47:27.587440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.948 ms 00:38:49.093 [2024-07-15 07:47:27.587475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.587804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.587846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:49.093 [2024-07-15 07:47:27.587873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:38:49.093 [2024-07-15 07:47:27.587894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.619612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.619712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:49.093 [2024-07-15 07:47:27.619747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.674 ms 00:38:49.093 [2024-07-15 07:47:27.619767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.652204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.652332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:49.093 [2024-07-15 07:47:27.652370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.280 ms 00:38:49.093 [2024-07-15 07:47:27.652390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.093 [2024-07-15 07:47:27.688591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.093 [2024-07-15 07:47:27.688698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:49.093 [2024-07-15 07:47:27.688733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.979 ms 00:38:49.093 [2024-07-15 07:47:27.688753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.355 [2024-07-15 07:47:27.721987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.355 [2024-07-15 07:47:27.722095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:49.355 [2024-07-15 07:47:27.722131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.975 ms 00:38:49.355 [2024-07-15 07:47:27.722150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.355 [2024-07-15 07:47:27.722322] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:49.355 [2024-07-15 07:47:27.722370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:38:49.355 [2024-07-15 07:47:27.722448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.722999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.723022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.723044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:38:49.355 [2024-07-15 07:47:27.723076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.723098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:49.355 [2024-07-15 07:47:27.723122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.723987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724182] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:49.356 [2024-07-15 07:47:27.724767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:49.356 [2024-07-15 07:47:27.724795] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: d442620d-548b-4e89-8b2c-9e30b59e312d 00:38:49.356 [2024-07-15 07:47:27.724819] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:49.356 [2024-07-15 07:47:27.724837] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:49.356 [2024-07-15 07:47:27.724873] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:49.356 [2024-07-15 07:47:27.724889] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:49.356 [2024-07-15 07:47:27.724904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:49.356 [2024-07-15 07:47:27.724923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:49.356 [2024-07-15 07:47:27.724943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:49.356 [2024-07-15 07:47:27.724962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:49.356 [2024-07-15 07:47:27.724980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:49.356 [2024-07-15 07:47:27.725001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.356 [2024-07-15 07:47:27.725023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:49.356 [2024-07-15 07:47:27.725053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:38:49.356 [2024-07-15 07:47:27.725087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.356 [2024-07-15 07:47:27.743965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.356 [2024-07-15 07:47:27.744056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:49.356 [2024-07-15 07:47:27.744102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.820 ms 00:38:49.356 [2024-07-15 07:47:27.744122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.356 [2024-07-15 07:47:27.744809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:49.356 [2024-07-15 07:47:27.744846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:49.356 [2024-07-15 07:47:27.744885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:38:49.356 [2024-07-15 07:47:27.744907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.356 [2024-07-15 07:47:27.789010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.356 [2024-07-15 07:47:27.789117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:49.356 [2024-07-15 07:47:27.789150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.356 [2024-07-15 07:47:27.789170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.356 [2024-07-15 07:47:27.789385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.357 [2024-07-15 07:47:27.789432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:49.357 [2024-07-15 07:47:27.789486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.357 [2024-07-15 07:47:27.789509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.357 [2024-07-15 07:47:27.789622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.357 [2024-07-15 07:47:27.789659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:38:49.357 [2024-07-15 07:47:27.789684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.357 [2024-07-15 07:47:27.789704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.357 [2024-07-15 07:47:27.789741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.357 [2024-07-15 07:47:27.789761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:49.357 [2024-07-15 07:47:27.789777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.357 [2024-07-15 07:47:27.789806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.357 [2024-07-15 07:47:27.910907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.357 [2024-07-15 07:47:27.911051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:49.357 [2024-07-15 07:47:27.911090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.357 [2024-07-15 07:47:27.911110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.003885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.004002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:49.638 [2024-07-15 07:47:28.004037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.004068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.004208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.004240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:49.638 [2024-07-15 07:47:28.004263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.004284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.004347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.004376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:49.638 [2024-07-15 07:47:28.004397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.004419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.004640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.004679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:49.638 [2024-07-15 07:47:28.004704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.004725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.004813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.004842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:49.638 [2024-07-15 07:47:28.004866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.004889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.004983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.005027] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:49.638 [2024-07-15 07:47:28.005051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.005072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.005169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:49.638 [2024-07-15 07:47:28.005206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:49.638 [2024-07-15 07:47:28.005232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:49.638 [2024-07-15 07:47:28.005253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:49.638 [2024-07-15 07:47:28.005538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.269 ms, result 0 00:38:51.011 00:38:51.011 00:38:51.011 07:47:29 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:51.268 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:38:51.268 07:47:29 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:38:51.268 07:47:29 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:38:51.268 07:47:29 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:51.268 07:47:29 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:51.268 07:47:29 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:38:51.525 07:47:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:38:51.525 Process with pid 81613 is not found 00:38:51.525 07:47:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81613 00:38:51.525 07:47:29 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81613 ']' 00:38:51.525 07:47:29 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81613 00:38:51.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81613) - No such process 00:38:51.525 07:47:29 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81613 is not found' 00:38:51.525 00:38:51.525 real 1m14.381s 00:38:51.525 user 1m39.577s 00:38:51.525 sys 0m8.737s 00:38:51.525 07:47:29 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:51.525 ************************************ 00:38:51.525 END TEST ftl_trim 00:38:51.525 ************************************ 00:38:51.525 07:47:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:38:51.525 07:47:29 ftl -- common/autotest_common.sh@1142 -- # return 0 00:38:51.525 07:47:29 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:38:51.525 07:47:29 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:51.525 07:47:29 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:51.525 07:47:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:51.525 ************************************ 00:38:51.525 START TEST ftl_restore 00:38:51.525 ************************************ 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:38:51.525 * Looking for test storage... 
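A minimal sketch of the bdev stack the ftl_restore run below puts together, condensed from the rpc.py calls recorded later in this log (paths are abbreviated; the PCIe addresses and the 103424 MiB / 5171 MiB / 10 MiB figures are simply the values from this particular run; <lvstore-uuid> and <lvol-uuid> stand in for the UUIDs the target returns at runtime):

  # attach the base and NV-cache NVMe controllers
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # thin-provisioned lvol on the base device (103424 MiB in this run)
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
  # 5171 MiB split of the cache device, used as the FTL write-buffer cache
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of both, with a 10 MiB L2P DRAM limit
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0
  # teardown at the end of the test
  scripts/rpc.py bdev_ftl_unload -b ftl0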
00:38:51.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0P5E6XhWMR 00:38:51.525 07:47:30 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81898 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81898 00:38:51.525 07:47:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81898 ']' 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:51.525 07:47:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:38:51.782 [2024-07-15 07:47:30.259780] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:38:51.782 [2024-07-15 07:47:30.260078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81898 ] 00:38:52.039 [2024-07-15 07:47:30.454751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.295 [2024-07-15 07:47:30.730476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.237 07:47:31 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:53.237 07:47:31 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:38:53.237 07:47:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:53.496 07:47:32 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:53.496 07:47:32 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:38:53.496 07:47:32 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:53.496 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:38:53.496 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:53.496 07:47:32 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:38:53.496 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:38:53.496 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:53.754 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:53.754 { 00:38:53.754 "name": "nvme0n1", 00:38:53.754 "aliases": [ 00:38:53.754 "3bbb74f5-284d-4106-b2fe-00c28d0f0d4d" 00:38:53.754 ], 00:38:53.754 "product_name": "NVMe disk", 00:38:53.754 "block_size": 4096, 00:38:53.754 "num_blocks": 1310720, 00:38:53.754 "uuid": "3bbb74f5-284d-4106-b2fe-00c28d0f0d4d", 00:38:53.754 "assigned_rate_limits": { 00:38:53.754 "rw_ios_per_sec": 0, 00:38:53.754 "rw_mbytes_per_sec": 0, 00:38:53.754 "r_mbytes_per_sec": 0, 00:38:53.754 "w_mbytes_per_sec": 0 00:38:53.754 }, 00:38:53.754 "claimed": true, 00:38:53.754 "claim_type": "read_many_write_one", 00:38:53.754 "zoned": false, 00:38:53.754 "supported_io_types": { 00:38:53.754 "read": true, 00:38:53.754 "write": true, 00:38:53.754 "unmap": true, 00:38:53.754 "flush": true, 00:38:53.754 "reset": true, 00:38:53.754 "nvme_admin": true, 00:38:53.754 "nvme_io": true, 00:38:53.754 "nvme_io_md": false, 00:38:53.754 "write_zeroes": true, 00:38:53.754 "zcopy": false, 00:38:53.754 "get_zone_info": false, 00:38:53.754 "zone_management": false, 00:38:53.754 "zone_append": false, 00:38:53.754 "compare": true, 00:38:53.754 "compare_and_write": false, 00:38:53.754 "abort": true, 00:38:53.754 "seek_hole": false, 00:38:53.754 "seek_data": false, 00:38:53.754 "copy": true, 00:38:53.754 "nvme_iov_md": false 00:38:53.754 }, 00:38:53.754 "driver_specific": { 00:38:53.754 "nvme": [ 00:38:53.754 { 00:38:53.754 "pci_address": "0000:00:11.0", 00:38:53.754 "trid": { 00:38:53.754 "trtype": "PCIe", 00:38:53.754 "traddr": "0000:00:11.0" 00:38:53.754 }, 00:38:53.754 "ctrlr_data": { 00:38:53.754 "cntlid": 0, 00:38:53.754 "vendor_id": "0x1b36", 00:38:53.754 "model_number": "QEMU NVMe Ctrl", 00:38:53.754 "serial_number": "12341", 00:38:53.754 "firmware_revision": "8.0.0", 00:38:53.754 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:53.754 "oacs": { 00:38:53.754 "security": 0, 00:38:53.754 "format": 1, 00:38:53.754 "firmware": 0, 00:38:53.754 "ns_manage": 1 00:38:53.754 }, 00:38:53.754 "multi_ctrlr": false, 00:38:53.754 "ana_reporting": false 00:38:53.754 }, 00:38:53.754 "vs": { 00:38:53.754 "nvme_version": "1.4" 00:38:53.754 }, 00:38:53.754 "ns_data": { 00:38:53.754 "id": 1, 00:38:53.754 "can_share": false 00:38:53.754 } 00:38:53.754 } 00:38:53.754 ], 00:38:53.754 "mp_policy": "active_passive" 00:38:53.754 } 00:38:53.754 } 00:38:53.754 ]' 00:38:53.754 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:54.012 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:38:54.012 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:54.012 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:38:54.012 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:38:54.012 07:47:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:38:54.012 07:47:32 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:38:54.012 07:47:32 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:54.012 07:47:32 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:38:54.012 07:47:32 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:38:54.012 07:47:32 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:54.270 07:47:32 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=0d71380a-bf2a-481a-a225-94463ceff5fb 00:38:54.270 07:47:32 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:38:54.270 07:47:32 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d71380a-bf2a-481a-a225-94463ceff5fb 00:38:54.528 07:47:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:54.785 07:47:33 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=39ff8240-741b-4d59-9b43-6090d24c275e 00:38:54.785 07:47:33 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 39ff8240-741b-4d59-9b43-6090d24c275e 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:38:55.041 07:47:33 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.041 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.041 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:55.041 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:38:55.041 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:38:55.041 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.298 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:55.298 { 00:38:55.298 "name": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:55.298 "aliases": [ 00:38:55.298 "lvs/nvme0n1p0" 00:38:55.298 ], 00:38:55.298 "product_name": "Logical Volume", 00:38:55.298 "block_size": 4096, 00:38:55.298 "num_blocks": 26476544, 00:38:55.298 "uuid": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:55.298 "assigned_rate_limits": { 00:38:55.298 "rw_ios_per_sec": 0, 00:38:55.298 "rw_mbytes_per_sec": 0, 00:38:55.298 "r_mbytes_per_sec": 0, 00:38:55.298 "w_mbytes_per_sec": 0 00:38:55.298 }, 00:38:55.298 "claimed": false, 00:38:55.298 "zoned": false, 00:38:55.298 "supported_io_types": { 00:38:55.298 "read": true, 00:38:55.298 "write": true, 00:38:55.298 "unmap": true, 00:38:55.298 "flush": false, 00:38:55.298 "reset": true, 00:38:55.298 "nvme_admin": false, 00:38:55.298 "nvme_io": false, 00:38:55.298 "nvme_io_md": false, 00:38:55.299 "write_zeroes": true, 00:38:55.299 "zcopy": false, 00:38:55.299 "get_zone_info": false, 00:38:55.299 "zone_management": false, 00:38:55.299 "zone_append": false, 00:38:55.299 "compare": false, 00:38:55.299 "compare_and_write": false, 00:38:55.299 "abort": false, 
00:38:55.299 "seek_hole": true, 00:38:55.299 "seek_data": true, 00:38:55.299 "copy": false, 00:38:55.299 "nvme_iov_md": false 00:38:55.299 }, 00:38:55.299 "driver_specific": { 00:38:55.299 "lvol": { 00:38:55.299 "lvol_store_uuid": "39ff8240-741b-4d59-9b43-6090d24c275e", 00:38:55.299 "base_bdev": "nvme0n1", 00:38:55.299 "thin_provision": true, 00:38:55.299 "num_allocated_clusters": 0, 00:38:55.299 "snapshot": false, 00:38:55.299 "clone": false, 00:38:55.299 "esnap_clone": false 00:38:55.299 } 00:38:55.299 } 00:38:55.299 } 00:38:55.299 ]' 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:55.299 07:47:33 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:38:55.299 07:47:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:38:55.299 07:47:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:38:55.299 07:47:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:55.556 07:47:34 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:55.556 07:47:34 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:55.556 07:47:34 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.556 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.556 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:55.556 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:38:55.556 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:38:55.556 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:55.849 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:55.849 { 00:38:55.849 "name": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:55.849 "aliases": [ 00:38:55.849 "lvs/nvme0n1p0" 00:38:55.849 ], 00:38:55.849 "product_name": "Logical Volume", 00:38:55.849 "block_size": 4096, 00:38:55.849 "num_blocks": 26476544, 00:38:55.849 "uuid": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:55.849 "assigned_rate_limits": { 00:38:55.849 "rw_ios_per_sec": 0, 00:38:55.849 "rw_mbytes_per_sec": 0, 00:38:55.849 "r_mbytes_per_sec": 0, 00:38:55.849 "w_mbytes_per_sec": 0 00:38:55.849 }, 00:38:55.849 "claimed": false, 00:38:55.849 "zoned": false, 00:38:55.849 "supported_io_types": { 00:38:55.849 "read": true, 00:38:55.849 "write": true, 00:38:55.849 "unmap": true, 00:38:55.849 "flush": false, 00:38:55.849 "reset": true, 00:38:55.849 "nvme_admin": false, 00:38:55.849 "nvme_io": false, 00:38:55.849 "nvme_io_md": false, 00:38:55.849 "write_zeroes": true, 00:38:55.849 "zcopy": false, 00:38:55.849 "get_zone_info": false, 00:38:55.849 "zone_management": false, 00:38:55.849 "zone_append": false, 00:38:55.849 "compare": false, 00:38:55.849 "compare_and_write": false, 00:38:55.849 "abort": false, 00:38:55.849 "seek_hole": true, 00:38:55.849 "seek_data": true, 
00:38:55.849 "copy": false, 00:38:55.849 "nvme_iov_md": false 00:38:55.849 }, 00:38:55.849 "driver_specific": { 00:38:55.849 "lvol": { 00:38:55.849 "lvol_store_uuid": "39ff8240-741b-4d59-9b43-6090d24c275e", 00:38:55.849 "base_bdev": "nvme0n1", 00:38:55.849 "thin_provision": true, 00:38:55.849 "num_allocated_clusters": 0, 00:38:55.850 "snapshot": false, 00:38:55.850 "clone": false, 00:38:55.850 "esnap_clone": false 00:38:55.850 } 00:38:55.850 } 00:38:55.850 } 00:38:55.850 ]' 00:38:55.850 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:55.850 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:38:55.850 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:56.108 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:56.108 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:56.108 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:38:56.108 07:47:34 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:38:56.108 07:47:34 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:56.366 07:47:34 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:38:56.366 07:47:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:56.366 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:56.366 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:56.366 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:38:56.366 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:38:56.366 07:47:34 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ddd53b2c-5d14-4719-a3c4-48b52be2be65 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:56.625 { 00:38:56.625 "name": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:56.625 "aliases": [ 00:38:56.625 "lvs/nvme0n1p0" 00:38:56.625 ], 00:38:56.625 "product_name": "Logical Volume", 00:38:56.625 "block_size": 4096, 00:38:56.625 "num_blocks": 26476544, 00:38:56.625 "uuid": "ddd53b2c-5d14-4719-a3c4-48b52be2be65", 00:38:56.625 "assigned_rate_limits": { 00:38:56.625 "rw_ios_per_sec": 0, 00:38:56.625 "rw_mbytes_per_sec": 0, 00:38:56.625 "r_mbytes_per_sec": 0, 00:38:56.625 "w_mbytes_per_sec": 0 00:38:56.625 }, 00:38:56.625 "claimed": false, 00:38:56.625 "zoned": false, 00:38:56.625 "supported_io_types": { 00:38:56.625 "read": true, 00:38:56.625 "write": true, 00:38:56.625 "unmap": true, 00:38:56.625 "flush": false, 00:38:56.625 "reset": true, 00:38:56.625 "nvme_admin": false, 00:38:56.625 "nvme_io": false, 00:38:56.625 "nvme_io_md": false, 00:38:56.625 "write_zeroes": true, 00:38:56.625 "zcopy": false, 00:38:56.625 "get_zone_info": false, 00:38:56.625 "zone_management": false, 00:38:56.625 "zone_append": false, 00:38:56.625 "compare": false, 00:38:56.625 "compare_and_write": false, 00:38:56.625 "abort": false, 00:38:56.625 "seek_hole": true, 00:38:56.625 "seek_data": true, 00:38:56.625 "copy": false, 00:38:56.625 "nvme_iov_md": false 00:38:56.625 }, 00:38:56.625 "driver_specific": { 00:38:56.625 "lvol": { 00:38:56.625 "lvol_store_uuid": "39ff8240-741b-4d59-9b43-6090d24c275e", 00:38:56.625 "base_bdev": "nvme0n1", 
00:38:56.625 "thin_provision": true, 00:38:56.625 "num_allocated_clusters": 0, 00:38:56.625 "snapshot": false, 00:38:56.625 "clone": false, 00:38:56.625 "esnap_clone": false 00:38:56.625 } 00:38:56.625 } 00:38:56.625 } 00:38:56.625 ]' 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:56.625 07:47:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ddd53b2c-5d14-4719-a3c4-48b52be2be65 --l2p_dram_limit 10' 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:38:56.625 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:38:56.625 07:47:35 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ddd53b2c-5d14-4719-a3c4-48b52be2be65 --l2p_dram_limit 10 -c nvc0n1p0 00:38:56.884 [2024-07-15 07:47:35.369575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.369667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:56.884 [2024-07-15 07:47:35.369692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:56.884 [2024-07-15 07:47:35.369709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.369819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.369843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:56.884 [2024-07-15 07:47:35.369857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:38:56.884 [2024-07-15 07:47:35.369872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.369905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:56.884 [2024-07-15 07:47:35.371019] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:56.884 [2024-07-15 07:47:35.371057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.371086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:56.884 [2024-07-15 07:47:35.371122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:38:56.884 [2024-07-15 07:47:35.371137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.371296] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:38:56.884 [2024-07-15 
07:47:35.373787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.373840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:56.884 [2024-07-15 07:47:35.373862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:38:56.884 [2024-07-15 07:47:35.373875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.388961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.389046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:56.884 [2024-07-15 07:47:35.389075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.992 ms 00:38:56.884 [2024-07-15 07:47:35.389088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.389263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.389300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:56.884 [2024-07-15 07:47:35.389318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:38:56.884 [2024-07-15 07:47:35.389331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.389478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.389507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:56.884 [2024-07-15 07:47:35.389526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:38:56.884 [2024-07-15 07:47:35.389542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.389589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:56.884 [2024-07-15 07:47:35.395965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.396055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:56.884 [2024-07-15 07:47:35.396075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.394 ms 00:38:56.884 [2024-07-15 07:47:35.396093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.396148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.396170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:56.884 [2024-07-15 07:47:35.396184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:56.884 [2024-07-15 07:47:35.396199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.396258] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:56.884 [2024-07-15 07:47:35.396439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:56.884 [2024-07-15 07:47:35.396482] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:56.884 [2024-07-15 07:47:35.396510] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:56.884 [2024-07-15 07:47:35.396527] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:38:56.884 [2024-07-15 07:47:35.396545] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:56.884 [2024-07-15 07:47:35.396558] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:56.884 [2024-07-15 07:47:35.396583] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:56.884 [2024-07-15 07:47:35.396598] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:56.884 [2024-07-15 07:47:35.396615] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:56.884 [2024-07-15 07:47:35.396628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.396643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:56.884 [2024-07-15 07:47:35.396655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:38:56.884 [2024-07-15 07:47:35.396670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.396767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.884 [2024-07-15 07:47:35.396787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:56.884 [2024-07-15 07:47:35.396799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:38:56.884 [2024-07-15 07:47:35.396813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.884 [2024-07-15 07:47:35.396932] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:56.884 [2024-07-15 07:47:35.396956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:56.884 [2024-07-15 07:47:35.396982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:56.884 [2024-07-15 07:47:35.396999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.884 [2024-07-15 07:47:35.397012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:56.885 [2024-07-15 07:47:35.397025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:56.885 [2024-07-15 07:47:35.397061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:56.885 [2024-07-15 07:47:35.397089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:56.885 [2024-07-15 07:47:35.397104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:56.885 [2024-07-15 07:47:35.397121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:56.885 [2024-07-15 07:47:35.397137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:56.885 [2024-07-15 07:47:35.397149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:56.885 [2024-07-15 07:47:35.397162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:56.885 [2024-07-15 07:47:35.397191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:38:56.885 [2024-07-15 07:47:35.397203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:56.885 [2024-07-15 07:47:35.397228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:56.885 [2024-07-15 07:47:35.397266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:56.885 [2024-07-15 07:47:35.397302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:56.885 [2024-07-15 07:47:35.397340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:56.885 [2024-07-15 07:47:35.397376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:56.885 [2024-07-15 07:47:35.397404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:56.885 [2024-07-15 07:47:35.397418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:56.885 [2024-07-15 07:47:35.397429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:56.885 [2024-07-15 07:47:35.397443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:56.885 [2024-07-15 07:47:35.397469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:56.885 [2024-07-15 07:47:35.397488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:56.885 [2024-07-15 07:47:35.397517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:56.885 [2024-07-15 07:47:35.397529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397542] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:56.885 [2024-07-15 07:47:35.397555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:56.885 [2024-07-15 07:47:35.397569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:56.885 [2024-07-15 07:47:35.397597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:56.885 [2024-07-15 07:47:35.397608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:56.885 [2024-07-15 07:47:35.397625] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:56.885 [2024-07-15 07:47:35.397637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:56.885 [2024-07-15 07:47:35.397651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:56.885 [2024-07-15 07:47:35.397662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:56.885 [2024-07-15 07:47:35.397682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:56.885 [2024-07-15 07:47:35.397697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:56.885 [2024-07-15 07:47:35.397731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:56.885 [2024-07-15 07:47:35.397746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:56.885 [2024-07-15 07:47:35.397758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:56.885 [2024-07-15 07:47:35.397772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:56.885 [2024-07-15 07:47:35.397784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:56.885 [2024-07-15 07:47:35.397799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:56.885 [2024-07-15 07:47:35.397811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:56.885 [2024-07-15 07:47:35.397827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:56.885 [2024-07-15 07:47:35.397839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:56.885 [2024-07-15 07:47:35.397912] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:56.885 [2024-07-15 07:47:35.397925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397942] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:56.885 [2024-07-15 07:47:35.397954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:56.885 [2024-07-15 07:47:35.397970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:56.885 [2024-07-15 07:47:35.397983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:56.885 [2024-07-15 07:47:35.397999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:56.885 [2024-07-15 07:47:35.398012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:56.885 [2024-07-15 07:47:35.398027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:38:56.885 [2024-07-15 07:47:35.398039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:56.885 [2024-07-15 07:47:35.398105] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:38:56.885 [2024-07-15 07:47:35.398134] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:59.410 [2024-07-15 07:47:37.901012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.901146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:59.410 [2024-07-15 07:47:37.901176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2502.905 ms 00:38:59.410 [2024-07-15 07:47:37.901191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.946898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.946963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:59.410 [2024-07-15 07:47:37.947010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.281 ms 00:38:59.410 [2024-07-15 07:47:37.947027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.947257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.947288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:59.410 [2024-07-15 07:47:37.947308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:38:59.410 [2024-07-15 07:47:37.947325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.997870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.997964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:59.410 [2024-07-15 07:47:37.998006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.479 ms 00:38:59.410 [2024-07-15 07:47:37.998038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.998121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.998146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:59.410 [2024-07-15 07:47:37.998163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.005 ms 00:38:59.410 [2024-07-15 07:47:37.998175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.999124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.410 [2024-07-15 07:47:37.999154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:59.410 [2024-07-15 07:47:37.999177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:38:59.410 [2024-07-15 07:47:37.999190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.410 [2024-07-15 07:47:37.999377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.411 [2024-07-15 07:47:37.999396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:59.411 [2024-07-15 07:47:37.999415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:38:59.411 [2024-07-15 07:47:37.999429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.411 [2024-07-15 07:47:38.023119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.411 [2024-07-15 07:47:38.023179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:59.411 [2024-07-15 07:47:38.023204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:38:59.411 [2024-07-15 07:47:38.023217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.038745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:59.668 [2024-07-15 07:47:38.044350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 07:47:38.044423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:59.668 [2024-07-15 07:47:38.044459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.998 ms 00:38:59.668 [2024-07-15 07:47:38.044501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.128892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 07:47:38.129018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:59.668 [2024-07-15 07:47:38.129043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.330 ms 00:38:59.668 [2024-07-15 07:47:38.129061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.129339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 07:47:38.129379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:59.668 [2024-07-15 07:47:38.129396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:38:59.668 [2024-07-15 07:47:38.129416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.160892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 07:47:38.161022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:59.668 [2024-07-15 07:47:38.161047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.373 ms 00:38:59.668 [2024-07-15 07:47:38.161065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.193075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 
07:47:38.193180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:59.668 [2024-07-15 07:47:38.193220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.902 ms 00:38:59.668 [2024-07-15 07:47:38.193237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.668 [2024-07-15 07:47:38.194226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.668 [2024-07-15 07:47:38.194283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:59.668 [2024-07-15 07:47:38.194301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:38:59.668 [2024-07-15 07:47:38.194322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.288093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.288221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:59.927 [2024-07-15 07:47:38.288263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.696 ms 00:38:59.927 [2024-07-15 07:47:38.288287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.323460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.323607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:59.927 [2024-07-15 07:47:38.323632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.099 ms 00:38:59.927 [2024-07-15 07:47:38.323648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.357325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.357413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:59.927 [2024-07-15 07:47:38.357449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.596 ms 00:38:59.927 [2024-07-15 07:47:38.357465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.388261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.388331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:59.927 [2024-07-15 07:47:38.388352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.733 ms 00:38:59.927 [2024-07-15 07:47:38.388368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.388443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.388489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:59.927 [2024-07-15 07:47:38.388506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:38:59.927 [2024-07-15 07:47:38.388525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.388657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.927 [2024-07-15 07:47:38.388683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:59.927 [2024-07-15 07:47:38.388701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:59.927 [2024-07-15 07:47:38.388716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.927 [2024-07-15 07:47:38.390325] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3020.177 ms, result 0 00:38:59.927 { 00:38:59.927 "name": "ftl0", 00:38:59.927 "uuid": "45559ceb-2fe3-42d7-a6cd-26f4649c2042" 00:38:59.927 } 00:38:59.927 07:47:38 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:38:59.927 07:47:38 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:39:00.185 07:47:38 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:39:00.185 07:47:38 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:39:00.444 [2024-07-15 07:47:38.921460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.921564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:00.444 [2024-07-15 07:47:38.921606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:00.444 [2024-07-15 07:47:38.921620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.921666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:00.444 [2024-07-15 07:47:38.925780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.925823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:00.444 [2024-07-15 07:47:38.925840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.087 ms 00:39:00.444 [2024-07-15 07:47:38.925856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.926208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.926249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:00.444 [2024-07-15 07:47:38.926282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:39:00.444 [2024-07-15 07:47:38.926298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.929643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.929712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:00.444 [2024-07-15 07:47:38.929728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:39:00.444 [2024-07-15 07:47:38.929743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.936267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.936377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:00.444 [2024-07-15 07:47:38.936397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.498 ms 00:39:00.444 [2024-07-15 07:47:38.936412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.971312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.971395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:00.444 [2024-07-15 07:47:38.971419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.773 ms 00:39:00.444 [2024-07-15 07:47:38.971435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 
07:47:38.993421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.993523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:00.444 [2024-07-15 07:47:38.993547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.881 ms 00:39:00.444 [2024-07-15 07:47:38.993572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:38.993802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:38.993840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:00.444 [2024-07-15 07:47:38.993856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:39:00.444 [2024-07-15 07:47:38.993871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.444 [2024-07-15 07:47:39.025022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.444 [2024-07-15 07:47:39.025124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:39:00.444 [2024-07-15 07:47:39.025144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.122 ms 00:39:00.444 [2024-07-15 07:47:39.025159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.710 [2024-07-15 07:47:39.057301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.710 [2024-07-15 07:47:39.057357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:39:00.710 [2024-07-15 07:47:39.057376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.052 ms 00:39:00.710 [2024-07-15 07:47:39.057391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.710 [2024-07-15 07:47:39.089281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.710 [2024-07-15 07:47:39.089367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:00.710 [2024-07-15 07:47:39.089402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.812 ms 00:39:00.710 [2024-07-15 07:47:39.089417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.710 [2024-07-15 07:47:39.118375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.710 [2024-07-15 07:47:39.118440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:00.710 [2024-07-15 07:47:39.118484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.703 ms 00:39:00.710 [2024-07-15 07:47:39.118502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.710 [2024-07-15 07:47:39.118553] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:00.710 [2024-07-15 07:47:39.118583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 
07:47:39.118669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.118992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:39:00.711 [2024-07-15 07:47:39.119071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.119998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.120013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:00.711 [2024-07-15 07:47:39.120027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:00.712 [2024-07-15 07:47:39.120162] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:00.712 [2024-07-15 07:47:39.120177] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:39:00.712 [2024-07-15 07:47:39.120193] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:00.712 [2024-07-15 07:47:39.120205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:00.712 [2024-07-15 07:47:39.120222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:00.712 [2024-07-15 07:47:39.120234] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:00.712 [2024-07-15 07:47:39.120248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:00.712 [2024-07-15 07:47:39.120260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:00.712 [2024-07-15 07:47:39.120275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:00.712 [2024-07-15 07:47:39.120286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:00.712 [2024-07-15 07:47:39.120299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:00.712 [2024-07-15 07:47:39.120310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.712 [2024-07-15 07:47:39.120325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:00.712 [2024-07-15 07:47:39.120339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:39:00.712 [2024-07-15 07:47:39.120353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.138860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.712 [2024-07-15 07:47:39.138924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:00.712 [2024-07-15 07:47:39.138958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:39:00.712 [2024-07-15 07:47:39.138973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.139548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.712 [2024-07-15 07:47:39.139812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:00.712 [2024-07-15 07:47:39.139840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:39:00.712 [2024-07-15 07:47:39.139861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.195984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.712 [2024-07-15 07:47:39.196100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:00.712 [2024-07-15 07:47:39.196121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.712 [2024-07-15 07:47:39.196137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.196249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.712 [2024-07-15 07:47:39.196270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:00.712 [2024-07-15 07:47:39.196283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.712 [2024-07-15 07:47:39.196302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.196452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.712 [2024-07-15 07:47:39.196509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:00.712 [2024-07-15 07:47:39.196527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.712 [2024-07-15 07:47:39.196543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.196574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.712 [2024-07-15 07:47:39.196597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:39:00.712 [2024-07-15 07:47:39.196610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.712 [2024-07-15 07:47:39.196624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.712 [2024-07-15 07:47:39.311743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.712 [2024-07-15 07:47:39.311854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:00.712 [2024-07-15 07:47:39.311878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.712 [2024-07-15 07:47:39.311895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.970 [2024-07-15 07:47:39.403442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.970 [2024-07-15 07:47:39.403624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:00.970 [2024-07-15 07:47:39.403648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.970 [2024-07-15 07:47:39.403669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.970 [2024-07-15 07:47:39.403814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.970 [2024-07-15 07:47:39.403841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:00.970 [2024-07-15 07:47:39.403855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.403871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.403944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.971 [2024-07-15 07:47:39.403972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:00.971 [2024-07-15 07:47:39.403986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.404001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.404149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.971 [2024-07-15 07:47:39.404173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:00.971 [2024-07-15 07:47:39.404187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.404213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.404284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.971 [2024-07-15 07:47:39.404323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:00.971 [2024-07-15 07:47:39.404339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.404354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.404417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.971 [2024-07-15 07:47:39.404438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:00.971 [2024-07-15 07:47:39.404465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.404485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.404550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.971 [2024-07-15 07:47:39.404581] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:00.971 [2024-07-15 07:47:39.404596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.971 [2024-07-15 07:47:39.404611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.971 [2024-07-15 07:47:39.404804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.295 ms, result 0 00:39:00.971 true 00:39:00.971 07:47:39 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81898 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81898 ']' 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81898 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81898 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:00.971 killing process with pid 81898 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81898' 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81898 00:39:00.971 07:47:39 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81898 00:39:06.236 07:47:44 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:39:10.471 262144+0 records in 00:39:10.471 262144+0 records out 00:39:10.471 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.72545 s, 227 MB/s 00:39:10.471 07:47:49 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:39:13.001 07:47:51 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:13.001 [2024-07-15 07:47:51.417517] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:39:13.001 [2024-07-15 07:47:51.417770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82146 ] 00:39:13.001 [2024-07-15 07:47:51.597206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.568 [2024-07-15 07:47:51.895467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.867 [2024-07-15 07:47:52.288746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:13.867 [2024-07-15 07:47:52.288834] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:13.867 [2024-07-15 07:47:52.454988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.867 [2024-07-15 07:47:52.455101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:13.867 [2024-07-15 07:47:52.455125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:13.867 [2024-07-15 07:47:52.455138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.867 [2024-07-15 07:47:52.455216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.867 [2024-07-15 07:47:52.455237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:13.867 [2024-07-15 07:47:52.455251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:39:13.867 [2024-07-15 07:47:52.455267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.867 [2024-07-15 07:47:52.455300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:13.867 [2024-07-15 07:47:52.456228] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:13.867 [2024-07-15 07:47:52.456264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.867 [2024-07-15 07:47:52.456283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:13.867 [2024-07-15 07:47:52.456296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:39:13.867 [2024-07-15 07:47:52.456308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.867 [2024-07-15 07:47:52.458860] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:13.867 [2024-07-15 07:47:52.475967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.867 [2024-07-15 07:47:52.476024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:13.867 [2024-07-15 07:47:52.476059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.109 ms 00:39:13.867 [2024-07-15 07:47:52.476070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:13.867 [2024-07-15 07:47:52.476144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:13.867 [2024-07-15 07:47:52.476164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:13.867 [2024-07-15 07:47:52.476181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:39:13.867 [2024-07-15 07:47:52.476192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.488572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:14.131 [2024-07-15 07:47:52.488621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:14.131 [2024-07-15 07:47:52.488639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.252 ms 00:39:14.131 [2024-07-15 07:47:52.488651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.488765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.488788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:14.131 [2024-07-15 07:47:52.488801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:39:14.131 [2024-07-15 07:47:52.488814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.488906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.488941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:14.131 [2024-07-15 07:47:52.488956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:39:14.131 [2024-07-15 07:47:52.488968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.489010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:14.131 [2024-07-15 07:47:52.494696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.494745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:14.131 [2024-07-15 07:47:52.494777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.699 ms 00:39:14.131 [2024-07-15 07:47:52.494788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.494841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.494859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:14.131 [2024-07-15 07:47:52.494872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:14.131 [2024-07-15 07:47:52.494883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.494926] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:14.131 [2024-07-15 07:47:52.494975] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:14.131 [2024-07-15 07:47:52.495043] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:14.131 [2024-07-15 07:47:52.495073] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:39:14.131 [2024-07-15 07:47:52.495181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:14.131 [2024-07-15 07:47:52.495198] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:14.131 [2024-07-15 07:47:52.495214] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:39:14.131 [2024-07-15 07:47:52.495230] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495244] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495257] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:14.131 [2024-07-15 07:47:52.495268] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:14.131 [2024-07-15 07:47:52.495280] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:14.131 [2024-07-15 07:47:52.495292] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:14.131 [2024-07-15 07:47:52.495304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.495321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:14.131 [2024-07-15 07:47:52.495340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:39:14.131 [2024-07-15 07:47:52.495357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.495445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.131 [2024-07-15 07:47:52.495474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:14.131 [2024-07-15 07:47:52.495487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:39:14.131 [2024-07-15 07:47:52.495499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.131 [2024-07-15 07:47:52.495610] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:14.131 [2024-07-15 07:47:52.495638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:14.131 [2024-07-15 07:47:52.495658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:14.131 [2024-07-15 07:47:52.495697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:14.131 [2024-07-15 07:47:52.495732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:14.131 [2024-07-15 07:47:52.495755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:14.131 [2024-07-15 07:47:52.495765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:14.131 [2024-07-15 07:47:52.495776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:14.131 [2024-07-15 07:47:52.495787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:14.131 [2024-07-15 07:47:52.495798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:14.131 [2024-07-15 07:47:52.495808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:14.131 [2024-07-15 07:47:52.495830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495840] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:14.131 [2024-07-15 07:47:52.495877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:14.131 [2024-07-15 07:47:52.495888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:14.131 [2024-07-15 07:47:52.495899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:14.132 [2024-07-15 07:47:52.495910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:14.132 [2024-07-15 07:47:52.495920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:14.132 [2024-07-15 07:47:52.495931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:14.132 [2024-07-15 07:47:52.495942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:14.132 [2024-07-15 07:47:52.495953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:14.132 [2024-07-15 07:47:52.495963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:14.132 [2024-07-15 07:47:52.495974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:14.132 [2024-07-15 07:47:52.495985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:14.132 [2024-07-15 07:47:52.495995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:14.132 [2024-07-15 07:47:52.496005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:14.132 [2024-07-15 07:47:52.496016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:14.132 [2024-07-15 07:47:52.496027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:14.132 [2024-07-15 07:47:52.496038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:14.132 [2024-07-15 07:47:52.496049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:14.132 [2024-07-15 07:47:52.496061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:14.132 [2024-07-15 07:47:52.496072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:14.132 [2024-07-15 07:47:52.496083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.132 [2024-07-15 07:47:52.496094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:14.132 [2024-07-15 07:47:52.496105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:14.132 [2024-07-15 07:47:52.496116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.132 [2024-07-15 07:47:52.496127] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:14.132 [2024-07-15 07:47:52.496138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:14.132 [2024-07-15 07:47:52.496150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:14.132 [2024-07-15 07:47:52.496161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:14.132 [2024-07-15 07:47:52.496173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:14.132 [2024-07-15 07:47:52.496184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:14.132 [2024-07-15 07:47:52.496195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:14.132 
[2024-07-15 07:47:52.496206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:14.132 [2024-07-15 07:47:52.496216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:14.132 [2024-07-15 07:47:52.496227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:14.132 [2024-07-15 07:47:52.496239] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:14.132 [2024-07-15 07:47:52.496253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:14.132 [2024-07-15 07:47:52.496278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:14.132 [2024-07-15 07:47:52.496290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:14.132 [2024-07-15 07:47:52.496302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:14.132 [2024-07-15 07:47:52.496313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:14.132 [2024-07-15 07:47:52.496324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:14.132 [2024-07-15 07:47:52.496336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:14.132 [2024-07-15 07:47:52.496347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:14.132 [2024-07-15 07:47:52.496359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:14.132 [2024-07-15 07:47:52.496370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:14.132 [2024-07-15 07:47:52.496430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:14.132 [2024-07-15 07:47:52.496444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:14.132 [2024-07-15 07:47:52.496485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:14.132 [2024-07-15 07:47:52.496497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:14.132 [2024-07-15 07:47:52.496510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:14.132 [2024-07-15 07:47:52.496523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.496541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:14.132 [2024-07-15 07:47:52.496554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:39:14.132 [2024-07-15 07:47:52.496566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.553805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.553896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:14.132 [2024-07-15 07:47:52.553934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.167 ms 00:39:14.132 [2024-07-15 07:47:52.553947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.554092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.554109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:14.132 [2024-07-15 07:47:52.554122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:39:14.132 [2024-07-15 07:47:52.554133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.603439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.603530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:14.132 [2024-07-15 07:47:52.603556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.148 ms 00:39:14.132 [2024-07-15 07:47:52.603575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.603664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.603682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:14.132 [2024-07-15 07:47:52.603696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:14.132 [2024-07-15 07:47:52.603709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.604573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.604616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:14.132 [2024-07-15 07:47:52.604632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:39:14.132 [2024-07-15 07:47:52.604658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.604875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.604895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:14.132 [2024-07-15 07:47:52.604908] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:39:14.132 [2024-07-15 07:47:52.604920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.625773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.625889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:14.132 [2024-07-15 07:47:52.625912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.821 ms 00:39:14.132 [2024-07-15 07:47:52.625925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.644142] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:39:14.132 [2024-07-15 07:47:52.644272] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:14.132 [2024-07-15 07:47:52.644302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.644317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:14.132 [2024-07-15 07:47:52.644335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.136 ms 00:39:14.132 [2024-07-15 07:47:52.644347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.675148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.675242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:14.132 [2024-07-15 07:47:52.675264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.714 ms 00:39:14.132 [2024-07-15 07:47:52.675278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.132 [2024-07-15 07:47:52.691190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.132 [2024-07-15 07:47:52.691233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:14.133 [2024-07-15 07:47:52.691251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.803 ms 00:39:14.133 [2024-07-15 07:47:52.691263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.133 [2024-07-15 07:47:52.706325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.133 [2024-07-15 07:47:52.706375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:14.133 [2024-07-15 07:47:52.706409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.018 ms 00:39:14.133 [2024-07-15 07:47:52.706421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.133 [2024-07-15 07:47:52.707490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.133 [2024-07-15 07:47:52.707536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:14.133 [2024-07-15 07:47:52.707556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:39:14.133 [2024-07-15 07:47:52.707568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.843142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.843268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:14.392 [2024-07-15 07:47:52.843307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 135.543 ms 00:39:14.392 [2024-07-15 07:47:52.843323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.859679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:14.392 [2024-07-15 07:47:52.865576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.865648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:14.392 [2024-07-15 07:47:52.865684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.062 ms 00:39:14.392 [2024-07-15 07:47:52.865699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.865885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.865911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:14.392 [2024-07-15 07:47:52.865928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:39:14.392 [2024-07-15 07:47:52.865943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.866076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.866113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:14.392 [2024-07-15 07:47:52.866139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:39:14.392 [2024-07-15 07:47:52.866153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.866198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.866218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:14.392 [2024-07-15 07:47:52.866234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:14.392 [2024-07-15 07:47:52.866249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.866318] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:14.392 [2024-07-15 07:47:52.866341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.866356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:14.392 [2024-07-15 07:47:52.866372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:39:14.392 [2024-07-15 07:47:52.866393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.906353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.906423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:14.392 [2024-07-15 07:47:52.906446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.917 ms 00:39:14.392 [2024-07-15 07:47:52.906474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:14.392 [2024-07-15 07:47:52.906587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:14.392 [2024-07-15 07:47:52.906611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:14.392 [2024-07-15 07:47:52.906650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:14.392 [2024-07-15 07:47:52.906665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:39:14.392 [2024-07-15 07:47:52.908554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.845 ms, result 0 00:39:54.366  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (26 MBps) Copying: 77/1024 [MB] (26 MBps) Copying: 102/1024 [MB] (25 MBps) Copying: 128/1024 [MB] (25 MBps) Copying: 154/1024 [MB] (26 MBps) Copying: 181/1024 [MB] (27 MBps) Copying: 207/1024 [MB] (25 MBps) Copying: 233/1024 [MB] (26 MBps) Copying: 258/1024 [MB] (24 MBps) Copying: 284/1024 [MB] (26 MBps) Copying: 311/1024 [MB] (26 MBps) Copying: 337/1024 [MB] (26 MBps) Copying: 362/1024 [MB] (25 MBps) Copying: 388/1024 [MB] (26 MBps) Copying: 415/1024 [MB] (26 MBps) Copying: 440/1024 [MB] (25 MBps) Copying: 465/1024 [MB] (24 MBps) Copying: 491/1024 [MB] (26 MBps) Copying: 518/1024 [MB] (26 MBps) Copying: 544/1024 [MB] (25 MBps) Copying: 569/1024 [MB] (25 MBps) Copying: 595/1024 [MB] (25 MBps) Copying: 620/1024 [MB] (25 MBps) Copying: 646/1024 [MB] (25 MBps) Copying: 671/1024 [MB] (25 MBps) Copying: 696/1024 [MB] (25 MBps) Copying: 722/1024 [MB] (25 MBps) Copying: 748/1024 [MB] (26 MBps) Copying: 775/1024 [MB] (26 MBps) Copying: 800/1024 [MB] (25 MBps) Copying: 825/1024 [MB] (24 MBps) Copying: 852/1024 [MB] (26 MBps) Copying: 878/1024 [MB] (26 MBps) Copying: 903/1024 [MB] (24 MBps) Copying: 929/1024 [MB] (25 MBps) Copying: 953/1024 [MB] (24 MBps) Copying: 978/1024 [MB] (25 MBps) Copying: 1003/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 07:48:32.698557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.698658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:54.366 [2024-07-15 07:48:32.698684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:54.366 [2024-07-15 07:48:32.698697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.698729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:54.366 [2024-07-15 07:48:32.702785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.702822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:54.366 [2024-07-15 07:48:32.702839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:39:54.366 [2024-07-15 07:48:32.702851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.704721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.704768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:54.366 [2024-07-15 07:48:32.704794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:39:54.366 [2024-07-15 07:48:32.704808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.721253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.721319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:54.366 [2024-07-15 07:48:32.721338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.420 ms 00:39:54.366 [2024-07-15 07:48:32.721351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.727856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:54.366 [2024-07-15 07:48:32.727914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:54.366 [2024-07-15 07:48:32.727956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:39:54.366 [2024-07-15 07:48:32.727967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.758555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.758662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:54.366 [2024-07-15 07:48:32.758700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:39:54.366 [2024-07-15 07:48:32.758713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.778053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.778098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:54.366 [2024-07-15 07:48:32.778131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.268 ms 00:39:54.366 [2024-07-15 07:48:32.778144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.778408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.778443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:54.366 [2024-07-15 07:48:32.778474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:39:54.366 [2024-07-15 07:48:32.778487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.808437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.808484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:39:54.366 [2024-07-15 07:48:32.808517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.920 ms 00:39:54.366 [2024-07-15 07:48:32.808528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.837827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.837918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:39:54.366 [2024-07-15 07:48:32.837954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.199 ms 00:39:54.366 [2024-07-15 07:48:32.837965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.867455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.867501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:54.366 [2024-07-15 07:48:32.867533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.414 ms 00:39:54.366 [2024-07-15 07:48:32.867560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 07:48:32.897455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.366 [2024-07-15 07:48:32.897511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:54.366 [2024-07-15 07:48:32.897544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.767 ms 00:39:54.366 [2024-07-15 07:48:32.897555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.366 [2024-07-15 
07:48:32.897659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:54.366 [2024-07-15 07:48:32.897687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:54.366 [2024-07-15 07:48:32.897838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.897983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 
07:48:32.897995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:39:54.367 [2024-07-15 07:48:32.898331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.898985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.899009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:54.367 [2024-07-15 07:48:32.899050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:54.367 [2024-07-15 07:48:32.899062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:39:54.367 [2024-07-15 07:48:32.899075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:54.367 [2024-07-15 07:48:32.899086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:54.367 [2024-07-15 07:48:32.899098] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:54.367 [2024-07-15 07:48:32.899119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:54.367 [2024-07-15 07:48:32.899131] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:54.367 [2024-07-15 07:48:32.899144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:54.367 [2024-07-15 07:48:32.899155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:54.367 [2024-07-15 07:48:32.899166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:54.367 [2024-07-15 07:48:32.899177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:54.367 [2024-07-15 07:48:32.899189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.367 [2024-07-15 07:48:32.899201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:54.368 [2024-07-15 07:48:32.899214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:39:54.368 [2024-07-15 07:48:32.899227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.915961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.368 [2024-07-15 07:48:32.916027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:54.368 [2024-07-15 07:48:32.916045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.687 ms 00:39:54.368 [2024-07-15 07:48:32.916072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.916702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.368 [2024-07-15 07:48:32.916728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:54.368 [2024-07-15 07:48:32.916743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:39:54.368 [2024-07-15 07:48:32.916756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.957585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.368 [2024-07-15 07:48:32.957684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:54.368 [2024-07-15 07:48:32.957722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.368 [2024-07-15 07:48:32.957735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.957843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.368 [2024-07-15 07:48:32.957859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:54.368 [2024-07-15 07:48:32.957887] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.368 [2024-07-15 07:48:32.957916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.958034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.368 [2024-07-15 07:48:32.958061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:54.368 [2024-07-15 07:48:32.958075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.368 [2024-07-15 07:48:32.958087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.368 [2024-07-15 07:48:32.958112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.368 [2024-07-15 07:48:32.958127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:54.368 [2024-07-15 07:48:32.958140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.368 [2024-07-15 07:48:32.958152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.073551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.073622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:54.627 [2024-07-15 07:48:33.073643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.073656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.161350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.161428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:54.627 [2024-07-15 07:48:33.161482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.161514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.161620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.161656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:54.627 [2024-07-15 07:48:33.161669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.161691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.161751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.161767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:54.627 [2024-07-15 07:48:33.161780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.161792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.161938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.161967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:54.627 [2024-07-15 07:48:33.161982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.162001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.162055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.162082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:39:54.627 [2024-07-15 07:48:33.162097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.162109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.162162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.162195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:54.627 [2024-07-15 07:48:33.162207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.162220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.162288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.627 [2024-07-15 07:48:33.162311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:54.627 [2024-07-15 07:48:33.162325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.627 [2024-07-15 07:48:33.162337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.627 [2024-07-15 07:48:33.162537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.910 ms, result 0 00:39:56.002 00:39:56.002 00:39:56.002 07:48:34 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:39:56.261 [2024-07-15 07:48:34.643737] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:39:56.261 [2024-07-15 07:48:34.643932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82569 ] 00:39:56.261 [2024-07-15 07:48:34.822323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.519 [2024-07-15 07:48:35.102518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.085 [2024-07-15 07:48:35.492164] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:57.085 [2024-07-15 07:48:35.492268] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:57.085 [2024-07-15 07:48:35.659730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.659820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:57.085 [2024-07-15 07:48:35.659861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:57.085 [2024-07-15 07:48:35.659874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.659980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.660002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:57.085 [2024-07-15 07:48:35.660015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:57.085 [2024-07-15 07:48:35.660032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.660080] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:57.085 
[2024-07-15 07:48:35.661178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:57.085 [2024-07-15 07:48:35.661215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.661235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:57.085 [2024-07-15 07:48:35.661249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:39:57.085 [2024-07-15 07:48:35.661261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.663858] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:57.085 [2024-07-15 07:48:35.680768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.680812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:57.085 [2024-07-15 07:48:35.680849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.911 ms 00:39:57.085 [2024-07-15 07:48:35.680860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.681000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.681022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:57.085 [2024-07-15 07:48:35.681041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:39:57.085 [2024-07-15 07:48:35.681053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.693827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.693872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:57.085 [2024-07-15 07:48:35.693906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:39:57.085 [2024-07-15 07:48:35.693919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.694065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.694090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:57.085 [2024-07-15 07:48:35.694104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:39:57.085 [2024-07-15 07:48:35.694116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.694205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.085 [2024-07-15 07:48:35.694235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:57.085 [2024-07-15 07:48:35.694251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:57.085 [2024-07-15 07:48:35.694262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.085 [2024-07-15 07:48:35.694302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:57.345 [2024-07-15 07:48:35.700667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.345 [2024-07-15 07:48:35.700707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:57.345 [2024-07-15 07:48:35.700741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.376 ms 00:39:57.345 [2024-07-15 07:48:35.700768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:57.345 [2024-07-15 07:48:35.700857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.345 [2024-07-15 07:48:35.700875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:57.345 [2024-07-15 07:48:35.700889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:39:57.345 [2024-07-15 07:48:35.700901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.345 [2024-07-15 07:48:35.700960] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:57.345 [2024-07-15 07:48:35.700996] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:57.345 [2024-07-15 07:48:35.701053] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:57.345 [2024-07-15 07:48:35.701080] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:39:57.345 [2024-07-15 07:48:35.701188] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:57.346 [2024-07-15 07:48:35.701216] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:57.346 [2024-07-15 07:48:35.701233] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:39:57.346 [2024-07-15 07:48:35.701249] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701264] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701278] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:57.346 [2024-07-15 07:48:35.701290] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:57.346 [2024-07-15 07:48:35.701302] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:57.346 [2024-07-15 07:48:35.701314] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:57.346 [2024-07-15 07:48:35.701328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.701345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:57.346 [2024-07-15 07:48:35.701358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:39:57.346 [2024-07-15 07:48:35.701370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.346 [2024-07-15 07:48:35.701484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.701511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:57.346 [2024-07-15 07:48:35.701526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:39:57.346 [2024-07-15 07:48:35.701537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.346 [2024-07-15 07:48:35.701650] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:57.346 [2024-07-15 07:48:35.701672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:57.346 [2024-07-15 07:48:35.701693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:57.346 [2024-07-15 
07:48:35.701705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:57.346 [2024-07-15 07:48:35.701731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:57.346 [2024-07-15 07:48:35.701781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:57.346 [2024-07-15 07:48:35.701803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:57.346 [2024-07-15 07:48:35.701814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:57.346 [2024-07-15 07:48:35.701825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:57.346 [2024-07-15 07:48:35.701836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:57.346 [2024-07-15 07:48:35.701847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:57.346 [2024-07-15 07:48:35.701857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:57.346 [2024-07-15 07:48:35.701880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:57.346 [2024-07-15 07:48:35.701927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:57.346 [2024-07-15 07:48:35.701963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:57.346 [2024-07-15 07:48:35.701974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:57.346 [2024-07-15 07:48:35.701984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:57.346 [2024-07-15 07:48:35.701995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:57.346 [2024-07-15 07:48:35.702018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:57.346 [2024-07-15 07:48:35.702029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:57.346 [2024-07-15 07:48:35.702050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:57.346 [2024-07-15 07:48:35.702061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:57.346 [2024-07-15 07:48:35.702083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:39:57.346 [2024-07-15 07:48:35.702094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:57.346 [2024-07-15 07:48:35.702107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:57.346 [2024-07-15 07:48:35.702118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:57.346 [2024-07-15 07:48:35.702130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:57.346 [2024-07-15 07:48:35.702140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:57.346 [2024-07-15 07:48:35.702162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:57.346 [2024-07-15 07:48:35.702173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702184] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:57.346 [2024-07-15 07:48:35.702196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:57.346 [2024-07-15 07:48:35.702209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:57.346 [2024-07-15 07:48:35.702220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:57.346 [2024-07-15 07:48:35.702249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:57.346 [2024-07-15 07:48:35.702261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:57.346 [2024-07-15 07:48:35.702272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:57.346 [2024-07-15 07:48:35.702283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:57.346 [2024-07-15 07:48:35.702294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:57.346 [2024-07-15 07:48:35.702306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:57.346 [2024-07-15 07:48:35.702319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:57.346 [2024-07-15 07:48:35.702336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:57.346 [2024-07-15 07:48:35.702363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:57.346 [2024-07-15 07:48:35.702375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:57.346 [2024-07-15 07:48:35.702387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:57.346 [2024-07-15 07:48:35.702400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:57.346 [2024-07-15 07:48:35.702413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:57.346 [2024-07-15 07:48:35.702426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x6920 blk_sz:0x800 00:39:57.346 [2024-07-15 07:48:35.702438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:57.346 [2024-07-15 07:48:35.702450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:57.346 [2024-07-15 07:48:35.702462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:57.346 [2024-07-15 07:48:35.702544] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:57.346 [2024-07-15 07:48:35.702558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:57.346 [2024-07-15 07:48:35.702585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:57.346 [2024-07-15 07:48:35.702598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:57.346 [2024-07-15 07:48:35.702611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:57.346 [2024-07-15 07:48:35.702625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.702643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:57.346 [2024-07-15 07:48:35.702657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:39:57.346 [2024-07-15 07:48:35.702669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.346 [2024-07-15 07:48:35.759440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.759568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:57.346 [2024-07-15 07:48:35.759593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.694 ms 00:39:57.346 [2024-07-15 07:48:35.759607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.346 [2024-07-15 07:48:35.759785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.759803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:57.346 [2024-07-15 07:48:35.759817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:39:57.346 
[2024-07-15 07:48:35.759830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.346 [2024-07-15 07:48:35.807167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.346 [2024-07-15 07:48:35.807243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:57.347 [2024-07-15 07:48:35.807265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.214 ms 00:39:57.347 [2024-07-15 07:48:35.807279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.807378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.807396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:57.347 [2024-07-15 07:48:35.807410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:57.347 [2024-07-15 07:48:35.807437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.808382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.808412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:57.347 [2024-07-15 07:48:35.808428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:39:57.347 [2024-07-15 07:48:35.808441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.808690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.808720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:57.347 [2024-07-15 07:48:35.808735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:39:57.347 [2024-07-15 07:48:35.808747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.829661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.829713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:57.347 [2024-07-15 07:48:35.829749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.883 ms 00:39:57.347 [2024-07-15 07:48:35.829762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.848594] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:57.347 [2024-07-15 07:48:35.848693] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:57.347 [2024-07-15 07:48:35.848735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.848748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:57.347 [2024-07-15 07:48:35.848765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.776 ms 00:39:57.347 [2024-07-15 07:48:35.848777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.878726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.878832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:57.347 [2024-07-15 07:48:35.878871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.812 ms 00:39:57.347 [2024-07-15 07:48:35.878904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.896160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.896219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:57.347 [2024-07-15 07:48:35.896254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.997 ms 00:39:57.347 [2024-07-15 07:48:35.896267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.910708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.910799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:57.347 [2024-07-15 07:48:35.910817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.375 ms 00:39:57.347 [2024-07-15 07:48:35.910829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.347 [2024-07-15 07:48:35.911881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.347 [2024-07-15 07:48:35.911916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:57.347 [2024-07-15 07:48:35.911933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:39:57.347 [2024-07-15 07:48:35.911945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:35.995245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:35.995355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:57.606 [2024-07-15 07:48:35.995410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.269 ms 00:39:57.606 [2024-07-15 07:48:35.995424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.008256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:57.606 [2024-07-15 07:48:36.013958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.014018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:57.606 [2024-07-15 07:48:36.014057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:39:57.606 [2024-07-15 07:48:36.014069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.014283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.014315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:57.606 [2024-07-15 07:48:36.014331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:57.606 [2024-07-15 07:48:36.014343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.014480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.014517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:57.606 [2024-07-15 07:48:36.014532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:39:57.606 [2024-07-15 07:48:36.014545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.014584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.014600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core 
poller 00:39:57.606 [2024-07-15 07:48:36.014613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:57.606 [2024-07-15 07:48:36.014624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.014671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:57.606 [2024-07-15 07:48:36.014690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.014702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:57.606 [2024-07-15 07:48:36.014721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:39:57.606 [2024-07-15 07:48:36.014733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.048791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.048883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:57.606 [2024-07-15 07:48:36.048923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.028 ms 00:39:57.606 [2024-07-15 07:48:36.048937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.049038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.606 [2024-07-15 07:48:36.049070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:57.606 [2024-07-15 07:48:36.049085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:39:57.606 [2024-07-15 07:48:36.049097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.606 [2024-07-15 07:48:36.050772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.429 ms, result 0 00:40:38.793  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (24 MBps) Copying: 98/1024 [MB] (24 MBps) Copying: 123/1024 [MB] (25 MBps) Copying: 149/1024 [MB] (25 MBps) Copying: 175/1024 [MB] (25 MBps) Copying: 200/1024 [MB] (24 MBps) Copying: 222/1024 [MB] (22 MBps) Copying: 245/1024 [MB] (22 MBps) Copying: 267/1024 [MB] (22 MBps) Copying: 289/1024 [MB] (22 MBps) Copying: 312/1024 [MB] (22 MBps) Copying: 335/1024 [MB] (23 MBps) Copying: 362/1024 [MB] (26 MBps) Copying: 388/1024 [MB] (25 MBps) Copying: 414/1024 [MB] (26 MBps) Copying: 440/1024 [MB] (26 MBps) Copying: 466/1024 [MB] (25 MBps) Copying: 492/1024 [MB] (26 MBps) Copying: 518/1024 [MB] (25 MBps) Copying: 545/1024 [MB] (26 MBps) Copying: 572/1024 [MB] (26 MBps) Copying: 598/1024 [MB] (26 MBps) Copying: 625/1024 [MB] (27 MBps) Copying: 652/1024 [MB] (26 MBps) Copying: 679/1024 [MB] (26 MBps) Copying: 705/1024 [MB] (26 MBps) Copying: 730/1024 [MB] (24 MBps) Copying: 755/1024 [MB] (24 MBps) Copying: 781/1024 [MB] (25 MBps) Copying: 806/1024 [MB] (25 MBps) Copying: 831/1024 [MB] (24 MBps) Copying: 857/1024 [MB] (25 MBps) Copying: 883/1024 [MB] (25 MBps) Copying: 907/1024 [MB] (24 MBps) Copying: 933/1024 [MB] (25 MBps) Copying: 958/1024 [MB] (24 MBps) Copying: 983/1024 [MB] (25 MBps) Copying: 1009/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 07:49:17.401705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.793 [2024-07-15 07:49:17.401865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:38.793 [2024-07-15 07:49:17.401928] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:38.793 [2024-07-15 07:49:17.401971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.793 [2024-07-15 07:49:17.402061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:39.051 [2024-07-15 07:49:17.407299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.407353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:39.051 [2024-07-15 07:49:17.407385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.159 ms 00:40:39.051 [2024-07-15 07:49:17.407408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.407868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.407927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:39.051 [2024-07-15 07:49:17.407956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:40:39.051 [2024-07-15 07:49:17.407977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.411905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.411951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:39.051 [2024-07-15 07:49:17.411979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.888 ms 00:40:39.051 [2024-07-15 07:49:17.412009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.419149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.419197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:39.051 [2024-07-15 07:49:17.419237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.094 ms 00:40:39.051 [2024-07-15 07:49:17.419268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.452248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.452308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:39.051 [2024-07-15 07:49:17.452337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.802 ms 00:40:39.051 [2024-07-15 07:49:17.452358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.469753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.469816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:39.051 [2024-07-15 07:49:17.469846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.253 ms 00:40:39.051 [2024-07-15 07:49:17.469868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.470150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.470201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:39.051 [2024-07-15 07:49:17.470233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:40:39.051 [2024-07-15 07:49:17.470268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.501968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.502081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:40:39.051 [2024-07-15 07:49:17.502115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.661 ms 00:40:39.051 [2024-07-15 07:49:17.502135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.533757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.533830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:40:39.051 [2024-07-15 07:49:17.533864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.505 ms 00:40:39.051 [2024-07-15 07:49:17.533900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.564538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.051 [2024-07-15 07:49:17.564609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:39.051 [2024-07-15 07:49:17.564659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.573 ms 00:40:39.051 [2024-07-15 07:49:17.564681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.051 [2024-07-15 07:49:17.596074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.052 [2024-07-15 07:49:17.596142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:39.052 [2024-07-15 07:49:17.596170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.170 ms 00:40:39.052 [2024-07-15 07:49:17.596192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.052 [2024-07-15 07:49:17.596254] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:39.052 [2024-07-15 07:49:17.596293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:40:39.052 [2024-07-15 07:49:17.596581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.596990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 
wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:39.052 [2024-07-15 07:49:17.597937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.597961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.597985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598348] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:39.053 [2024-07-15 07:49:17.598708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:39.053 [2024-07-15 07:49:17.598731] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:40:39.053 [2024-07-15 07:49:17.598756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:39.053 [2024-07-15 07:49:17.598779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:39.053 [2024-07-15 07:49:17.598815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:39.053 [2024-07-15 07:49:17.598834] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:39.053 [2024-07-15 07:49:17.598847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:39.053 [2024-07-15 07:49:17.598868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:39.053 [2024-07-15 07:49:17.598889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:39.053 [2024-07-15 07:49:17.598911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:39.053 [2024-07-15 07:49:17.598933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:39.053 [2024-07-15 07:49:17.598956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.053 [2024-07-15 07:49:17.598979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:39.053 [2024-07-15 07:49:17.599011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:40:39.053 [2024-07-15 
07:49:17.599033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.053 [2024-07-15 07:49:17.619418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.053 [2024-07-15 07:49:17.619532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:39.053 [2024-07-15 07:49:17.619586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.299 ms 00:40:39.053 [2024-07-15 07:49:17.619606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.053 [2024-07-15 07:49:17.620365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:39.053 [2024-07-15 07:49:17.620447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:39.053 [2024-07-15 07:49:17.620492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:40:39.053 [2024-07-15 07:49:17.620517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.663316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.663393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:39.311 [2024-07-15 07:49:17.663426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.663469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.663631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.663658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:39.311 [2024-07-15 07:49:17.663681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.663703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.663844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.663885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:39.311 [2024-07-15 07:49:17.663911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.663930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.663974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.663997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:39.311 [2024-07-15 07:49:17.664014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.664036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.779979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.780076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:39.311 [2024-07-15 07:49:17.780109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.780128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.874718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.874831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:39.311 [2024-07-15 07:49:17.874866] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.874889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.875040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.875068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:39.311 [2024-07-15 07:49:17.875108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.875150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.875223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.875250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:39.311 [2024-07-15 07:49:17.875271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.875292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.875500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.875532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:39.311 [2024-07-15 07:49:17.875562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.875576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.875663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.875690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:39.311 [2024-07-15 07:49:17.875725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.875747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.875824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.875854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:39.311 [2024-07-15 07:49:17.875878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.875913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.876005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:39.311 [2024-07-15 07:49:17.876034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:39.311 [2024-07-15 07:49:17.876059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:39.311 [2024-07-15 07:49:17.876082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:39.311 [2024-07-15 07:49:17.876343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.600 ms, result 0 00:40:40.683 00:40:40.683 00:40:40.683 07:49:19 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:43.300 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:40:43.300 07:49:21 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:40:43.300 [2024-07-15 
07:49:21.473616] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:40:43.300 [2024-07-15 07:49:21.473794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83029 ] 00:40:43.300 [2024-07-15 07:49:21.643240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.558 [2024-07-15 07:49:21.943713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.815 [2024-07-15 07:49:22.337148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:43.815 [2024-07-15 07:49:22.337266] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:44.074 [2024-07-15 07:49:22.503414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.503525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:44.074 [2024-07-15 07:49:22.503549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:44.074 [2024-07-15 07:49:22.503563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.503663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.503686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:44.074 [2024-07-15 07:49:22.503700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:44.074 [2024-07-15 07:49:22.503716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.503752] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:44.074 [2024-07-15 07:49:22.504816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:44.074 [2024-07-15 07:49:22.504854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.504874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:44.074 [2024-07-15 07:49:22.504887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:40:44.074 [2024-07-15 07:49:22.504899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.507380] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:44.074 [2024-07-15 07:49:22.527713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.527840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:44.074 [2024-07-15 07:49:22.527881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.329 ms 00:40:44.074 [2024-07-15 07:49:22.527894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.528061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.528084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:44.074 [2024-07-15 07:49:22.528103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:44.074 [2024-07-15 07:49:22.528115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 
[2024-07-15 07:49:22.542473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.542598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:44.074 [2024-07-15 07:49:22.542620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.197 ms 00:40:44.074 [2024-07-15 07:49:22.542633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.542788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.542814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:44.074 [2024-07-15 07:49:22.542829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:40:44.074 [2024-07-15 07:49:22.542840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.542970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.542990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:44.074 [2024-07-15 07:49:22.543018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:40:44.074 [2024-07-15 07:49:22.543031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.543075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:44.074 [2024-07-15 07:49:22.548892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.548954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:44.074 [2024-07-15 07:49:22.548971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:40:44.074 [2024-07-15 07:49:22.548983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.549057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.549076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:44.074 [2024-07-15 07:49:22.549090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:40:44.074 [2024-07-15 07:49:22.549102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.549163] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:44.074 [2024-07-15 07:49:22.549201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:44.074 [2024-07-15 07:49:22.549250] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:44.074 [2024-07-15 07:49:22.549277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:40:44.074 [2024-07-15 07:49:22.549386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:44.074 [2024-07-15 07:49:22.549403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:44.074 [2024-07-15 07:49:22.549418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:40:44.074 [2024-07-15 07:49:22.549435] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base 
device capacity: 103424.00 MiB 00:40:44.074 [2024-07-15 07:49:22.549464] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:44.074 [2024-07-15 07:49:22.549481] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:44.074 [2024-07-15 07:49:22.549493] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:44.074 [2024-07-15 07:49:22.549505] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:44.074 [2024-07-15 07:49:22.549516] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:44.074 [2024-07-15 07:49:22.549529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.549547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:44.074 [2024-07-15 07:49:22.549559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:40:44.074 [2024-07-15 07:49:22.549571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.549669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.074 [2024-07-15 07:49:22.549685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:44.074 [2024-07-15 07:49:22.549698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:40:44.074 [2024-07-15 07:49:22.549709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.074 [2024-07-15 07:49:22.549820] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:44.074 [2024-07-15 07:49:22.549844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:44.074 [2024-07-15 07:49:22.549865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:44.074 [2024-07-15 07:49:22.549877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.074 [2024-07-15 07:49:22.549889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:44.074 [2024-07-15 07:49:22.549900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:44.074 [2024-07-15 07:49:22.549911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:44.074 [2024-07-15 07:49:22.549925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:44.074 [2024-07-15 07:49:22.549936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:44.074 [2024-07-15 07:49:22.549947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:44.074 [2024-07-15 07:49:22.549958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:44.074 [2024-07-15 07:49:22.549969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:44.074 [2024-07-15 07:49:22.549979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:44.074 [2024-07-15 07:49:22.549990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:44.074 [2024-07-15 07:49:22.550001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:44.074 [2024-07-15 07:49:22.550011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.074 [2024-07-15 07:49:22.550022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:44.074 [2024-07-15 07:49:22.550033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 114.00 MiB 00:40:44.074 [2024-07-15 07:49:22.550044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.074 [2024-07-15 07:49:22.550055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:44.074 [2024-07-15 07:49:22.550084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:44.075 [2024-07-15 07:49:22.550117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:44.075 [2024-07-15 07:49:22.550149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:44.075 [2024-07-15 07:49:22.550180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:44.075 [2024-07-15 07:49:22.550211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:44.075 [2024-07-15 07:49:22.550232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:44.075 [2024-07-15 07:49:22.550243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:44.075 [2024-07-15 07:49:22.550253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:44.075 [2024-07-15 07:49:22.550264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:44.075 [2024-07-15 07:49:22.550275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:44.075 [2024-07-15 07:49:22.550287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:44.075 [2024-07-15 07:49:22.550308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:44.075 [2024-07-15 07:49:22.550319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550329] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:44.075 [2024-07-15 07:49:22.550341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:44.075 [2024-07-15 07:49:22.550352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.075 [2024-07-15 07:49:22.550375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:44.075 [2024-07-15 07:49:22.550387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:44.075 [2024-07-15 07:49:22.550398] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:44.075 [2024-07-15 07:49:22.550409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:44.075 [2024-07-15 07:49:22.550419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:44.075 [2024-07-15 07:49:22.550430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:44.075 [2024-07-15 07:49:22.550442] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:44.075 [2024-07-15 07:49:22.550473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:44.075 [2024-07-15 07:49:22.550499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:44.075 [2024-07-15 07:49:22.550511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:44.075 [2024-07-15 07:49:22.550523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:44.075 [2024-07-15 07:49:22.550535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:44.075 [2024-07-15 07:49:22.550546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:44.075 [2024-07-15 07:49:22.550558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:44.075 [2024-07-15 07:49:22.550569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:44.075 [2024-07-15 07:49:22.550581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:44.075 [2024-07-15 07:49:22.550593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:44.075 [2024-07-15 07:49:22.550650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:44.075 [2024-07-15 07:49:22.550663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550677] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:44.075 [2024-07-15 07:49:22.550690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:44.075 [2024-07-15 07:49:22.550702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:44.075 [2024-07-15 07:49:22.550714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:44.075 [2024-07-15 07:49:22.550728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.550746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:44.075 [2024-07-15 07:49:22.550759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:40:44.075 [2024-07-15 07:49:22.550771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.610026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.610129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:44.075 [2024-07-15 07:49:22.610168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.176 ms 00:40:44.075 [2024-07-15 07:49:22.610182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.610330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.610348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:44.075 [2024-07-15 07:49:22.610362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:40:44.075 [2024-07-15 07:49:22.610373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.658354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.658480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:44.075 [2024-07-15 07:49:22.658507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.831 ms 00:40:44.075 [2024-07-15 07:49:22.658520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.658615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.658635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:44.075 [2024-07-15 07:49:22.658649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:44.075 [2024-07-15 07:49:22.658661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.659564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.659594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:44.075 [2024-07-15 07:49:22.659625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:40:44.075 [2024-07-15 07:49:22.659638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.659845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.659867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:40:44.075 [2024-07-15 07:49:22.659881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:40:44.075 [2024-07-15 07:49:22.659893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.075 [2024-07-15 07:49:22.681292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.075 [2024-07-15 07:49:22.681386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:44.075 [2024-07-15 07:49:22.681410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.364 ms 00:40:44.075 [2024-07-15 07:49:22.681424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.378 [2024-07-15 07:49:22.700110] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:44.378 [2024-07-15 07:49:22.700220] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:44.378 [2024-07-15 07:49:22.700262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.378 [2024-07-15 07:49:22.700276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:44.378 [2024-07-15 07:49:22.700294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:40:44.378 [2024-07-15 07:49:22.700306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.378 [2024-07-15 07:49:22.731772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.378 [2024-07-15 07:49:22.731894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:44.378 [2024-07-15 07:49:22.731918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.355 ms 00:40:44.378 [2024-07-15 07:49:22.731951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.378 [2024-07-15 07:49:22.749915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.749992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:44.379 [2024-07-15 07:49:22.750014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.840 ms 00:40:44.379 [2024-07-15 07:49:22.750027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.765383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.765444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:44.379 [2024-07-15 07:49:22.765473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.298 ms 00:40:44.379 [2024-07-15 07:49:22.765486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.766537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.766574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:44.379 [2024-07-15 07:49:22.766591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:40:44.379 [2024-07-15 07:49:22.766603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.853721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.854089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:44.379 
[2024-07-15 07:49:22.854200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.089 ms 00:40:44.379 [2024-07-15 07:49:22.854280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.867204] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:44.379 [2024-07-15 07:49:22.872681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.872824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:44.379 [2024-07-15 07:49:22.872946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.229 ms 00:40:44.379 [2024-07-15 07:49:22.873041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.873287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.873391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:44.379 [2024-07-15 07:49:22.873492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:44.379 [2024-07-15 07:49:22.873578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.873765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.873857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:44.379 [2024-07-15 07:49:22.873939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:40:44.379 [2024-07-15 07:49:22.874016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.874112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.874192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:44.379 [2024-07-15 07:49:22.874271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:44.379 [2024-07-15 07:49:22.874352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.874413] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:44.379 [2024-07-15 07:49:22.874433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.874446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:44.379 [2024-07-15 07:49:22.874490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:40:44.379 [2024-07-15 07:49:22.874502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.909957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.910072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:44.379 [2024-07-15 07:49:22.910096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.410 ms 00:40:44.379 [2024-07-15 07:49:22.910110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.910277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.379 [2024-07-15 07:49:22.910316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:44.379 [2024-07-15 07:49:22.910331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:40:44.379 [2024-07-15 
07:49:22.910343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.379 [2024-07-15 07:49:22.912204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.199 ms, result 0 00:41:25.692  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 103/1024 [MB] (25 MBps) Copying: 129/1024 [MB] (25 MBps) Copying: 154/1024 [MB] (25 MBps) Copying: 180/1024 [MB] (25 MBps) Copying: 205/1024 [MB] (25 MBps) Copying: 231/1024 [MB] (25 MBps) Copying: 257/1024 [MB] (26 MBps) Copying: 283/1024 [MB] (25 MBps) Copying: 310/1024 [MB] (27 MBps) Copying: 336/1024 [MB] (25 MBps) Copying: 362/1024 [MB] (26 MBps) Copying: 389/1024 [MB] (27 MBps) Copying: 414/1024 [MB] (25 MBps) Copying: 439/1024 [MB] (24 MBps) Copying: 465/1024 [MB] (25 MBps) Copying: 490/1024 [MB] (25 MBps) Copying: 516/1024 [MB] (25 MBps) Copying: 540/1024 [MB] (24 MBps) Copying: 567/1024 [MB] (26 MBps) Copying: 593/1024 [MB] (26 MBps) Copying: 619/1024 [MB] (25 MBps) Copying: 644/1024 [MB] (25 MBps) Copying: 670/1024 [MB] (25 MBps) Copying: 695/1024 [MB] (25 MBps) Copying: 720/1024 [MB] (24 MBps) Copying: 746/1024 [MB] (25 MBps) Copying: 771/1024 [MB] (24 MBps) Copying: 796/1024 [MB] (24 MBps) Copying: 820/1024 [MB] (24 MBps) Copying: 845/1024 [MB] (24 MBps) Copying: 869/1024 [MB] (23 MBps) Copying: 893/1024 [MB] (24 MBps) Copying: 917/1024 [MB] (24 MBps) Copying: 942/1024 [MB] (24 MBps) Copying: 967/1024 [MB] (25 MBps) Copying: 993/1024 [MB] (25 MBps) Copying: 1018/1024 [MB] (24 MBps) Copying: 1048304/1048576 [kB] (5308 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 07:50:04.244613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.692 [2024-07-15 07:50:04.244739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:25.692 [2024-07-15 07:50:04.244777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:25.692 [2024-07-15 07:50:04.244790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.692 [2024-07-15 07:50:04.248395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:25.692 [2024-07-15 07:50:04.255419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.692 [2024-07-15 07:50:04.255486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:25.692 [2024-07-15 07:50:04.255505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.950 ms 00:41:25.692 [2024-07-15 07:50:04.255517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.692 [2024-07-15 07:50:04.268125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.692 [2024-07-15 07:50:04.268206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:25.692 [2024-07-15 07:50:04.268242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.454 ms 00:41:25.692 [2024-07-15 07:50:04.268255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.692 [2024-07-15 07:50:04.290367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.692 [2024-07-15 07:50:04.290494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:25.692 [2024-07-15 07:50:04.290539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.078 ms 00:41:25.692 [2024-07-15 07:50:04.290552] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.692 [2024-07-15 07:50:04.296910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.692 [2024-07-15 07:50:04.296966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:25.692 [2024-07-15 07:50:04.296997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:41:25.692 [2024-07-15 07:50:04.297008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.331573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.331647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:25.951 [2024-07-15 07:50:04.331669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.476 ms 00:41:25.951 [2024-07-15 07:50:04.331682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.349179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.349241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:25.951 [2024-07-15 07:50:04.349276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.447 ms 00:41:25.951 [2024-07-15 07:50:04.349302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.443101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.443207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:25.951 [2024-07-15 07:50:04.443230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.743 ms 00:41:25.951 [2024-07-15 07:50:04.443243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.475549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.475604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:41:25.951 [2024-07-15 07:50:04.475637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.266 ms 00:41:25.951 [2024-07-15 07:50:04.475649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.505219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.505261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:41:25.951 [2024-07-15 07:50:04.505278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.527 ms 00:41:25.951 [2024-07-15 07:50:04.505289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.534882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.534936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:25.951 [2024-07-15 07:50:04.534983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.551 ms 00:41:25.951 [2024-07-15 07:50:04.534996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.951 [2024-07-15 07:50:04.564006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.951 [2024-07-15 07:50:04.564047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:25.951 [2024-07-15 07:50:04.564064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 28.893 ms 00:41:25.951 [2024-07-15 07:50:04.564076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.212 [2024-07-15 07:50:04.564119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:26.212 [2024-07-15 07:50:04.564144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:41:26.212 [2024-07-15 07:50:04.564161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 
[2024-07-15 07:50:04.564448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:41:26.212 [2024-07-15 07:50:04.564785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.564994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:26.212 [2024-07-15 07:50:04.565197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:26.213 [2024-07-15 07:50:04.565470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:26.213 [2024-07-15 07:50:04.565484] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:41:26.213 [2024-07-15 07:50:04.565497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:41:26.213 [2024-07-15 07:50:04.565509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:41:26.213 [2024-07-15 07:50:04.565521] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:41:26.213 [2024-07-15 07:50:04.565533] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:41:26.213 [2024-07-15 07:50:04.565545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:26.213 [2024-07-15 07:50:04.565564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:26.213 [2024-07-15 07:50:04.565575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:26.213 [2024-07-15 07:50:04.565586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:26.213 [2024-07-15 07:50:04.565597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:26.213 [2024-07-15 07:50:04.565609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.213 [2024-07-15 07:50:04.565626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:26.213 [2024-07-15 07:50:04.565639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:41:26.213 [2024-07-15 07:50:04.565651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.583319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.213 [2024-07-15 07:50:04.583362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:26.213 [2024-07-15 07:50:04.583394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms 00:41:26.213 [2024-07-15 07:50:04.583408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.583953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:26.213 [2024-07-15 07:50:04.583989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:26.213 [2024-07-15 07:50:04.584005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:41:26.213 [2024-07-15 07:50:04.584017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.624834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.213 [2024-07-15 07:50:04.624889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:26.213 [2024-07-15 07:50:04.624907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.213 [2024-07-15 07:50:04.624926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.625023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.213 [2024-07-15 
07:50:04.625039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:26.213 [2024-07-15 07:50:04.625052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.213 [2024-07-15 07:50:04.625063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.625161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.213 [2024-07-15 07:50:04.625181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:26.213 [2024-07-15 07:50:04.625194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.213 [2024-07-15 07:50:04.625206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.625236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.213 [2024-07-15 07:50:04.625258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:26.213 [2024-07-15 07:50:04.625272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.213 [2024-07-15 07:50:04.625283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.213 [2024-07-15 07:50:04.737178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.213 [2024-07-15 07:50:04.737250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:26.213 [2024-07-15 07:50:04.737287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.213 [2024-07-15 07:50:04.737299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:26.472 [2024-07-15 07:50:04.826310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.826323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:26.472 [2024-07-15 07:50:04.826495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.826507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:26.472 [2024-07-15 07:50:04.826617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.826628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:26.472 [2024-07-15 07:50:04.826810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.826821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826872] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:26.472 [2024-07-15 07:50:04.826910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.826922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.826974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.826990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:26.472 [2024-07-15 07:50:04.827015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.827028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.827115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:26.472 [2024-07-15 07:50:04.827150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:26.472 [2024-07-15 07:50:04.827164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:26.472 [2024-07-15 07:50:04.827176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:26.472 [2024-07-15 07:50:04.827348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 584.857 ms, result 0 00:41:28.372 00:41:28.372 00:41:28.372 07:50:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:41:28.372 [2024-07-15 07:50:06.667285] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
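The spdk_dd invocation above reads a block range back out of the ftl0 bdev into a regular file after the device has been shut down and reopened; in the restore test this read-back is presumably compared against the data written earlier, though the comparison itself is not visible in this part of the log. The flags are the ones shown on the command line: --ib names the input bdev, --of the output file, --json the SPDK JSON configuration used to create the bdevs, and --skip/--count select the block range (dd-style semantics assumed). A sketch of the same pattern with placeholder paths and a hypothetical verification step:

    SPDK_DIR=/path/to/spdk_repo/spdk          # placeholder
    REFERENCE=/path/to/original/data          # hypothetical reference copy

    # Skip the first 131072 input blocks and copy 262144 blocks from the ftl0
    # bdev into a regular file (block-count semantics assumed to match dd).
    "$SPDK_DIR/build/bin/spdk_dd" \
        --json="$SPDK_DIR/test/ftl/config/ftl.json" \
        --ib=ftl0 \
        --of="$SPDK_DIR/test/ftl/testfile" \
        --skip=131072 \
        --count=262144

    # Hypothetical check: the restored contents should match what was
    # originally written to that range.
    cmp "$SPDK_DIR/test/ftl/testfile" "$REFERENCE"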
00:41:28.372 [2024-07-15 07:50:06.667535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83473 ] 00:41:28.372 [2024-07-15 07:50:06.846597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.629 [2024-07-15 07:50:07.115885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.196 [2024-07-15 07:50:07.502223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:29.196 [2024-07-15 07:50:07.502321] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:29.196 [2024-07-15 07:50:07.668503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.668568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:29.196 [2024-07-15 07:50:07.668606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:29.196 [2024-07-15 07:50:07.668619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.668692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.668714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:29.196 [2024-07-15 07:50:07.668731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:41:29.196 [2024-07-15 07:50:07.668748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.668779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:29.196 [2024-07-15 07:50:07.669764] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:29.196 [2024-07-15 07:50:07.669803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.669822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:29.196 [2024-07-15 07:50:07.669835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:41:29.196 [2024-07-15 07:50:07.669848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.672599] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:29.196 [2024-07-15 07:50:07.690056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.690146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:29.196 [2024-07-15 07:50:07.690185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms 00:41:29.196 [2024-07-15 07:50:07.690199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.690363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.690386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:29.196 [2024-07-15 07:50:07.690406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:41:29.196 [2024-07-15 07:50:07.690418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.703895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:41:29.196 [2024-07-15 07:50:07.703966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:29.196 [2024-07-15 07:50:07.704002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.315 ms 00:41:29.196 [2024-07-15 07:50:07.704015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.704163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.704213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:29.196 [2024-07-15 07:50:07.704228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:41:29.196 [2024-07-15 07:50:07.704240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.704364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.704390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:29.196 [2024-07-15 07:50:07.704405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:41:29.196 [2024-07-15 07:50:07.704416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.704484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:29.196 [2024-07-15 07:50:07.710163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.710212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:29.196 [2024-07-15 07:50:07.710227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 00:41:29.196 [2024-07-15 07:50:07.710239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.710293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.710309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:29.196 [2024-07-15 07:50:07.710322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:29.196 [2024-07-15 07:50:07.710333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.710376] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:29.196 [2024-07-15 07:50:07.710410] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:29.196 [2024-07-15 07:50:07.710454] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:29.196 [2024-07-15 07:50:07.710495] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:41:29.196 [2024-07-15 07:50:07.710619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:29.196 [2024-07-15 07:50:07.710635] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:29.196 [2024-07-15 07:50:07.710649] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:41:29.196 [2024-07-15 07:50:07.710664] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:29.196 [2024-07-15 07:50:07.710679] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:29.196 [2024-07-15 07:50:07.710692] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:29.196 [2024-07-15 07:50:07.710705] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:29.196 [2024-07-15 07:50:07.710716] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:29.196 [2024-07-15 07:50:07.710727] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:29.196 [2024-07-15 07:50:07.710740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.710756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:29.196 [2024-07-15 07:50:07.710769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:41:29.196 [2024-07-15 07:50:07.710780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.710880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.196 [2024-07-15 07:50:07.710894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:29.196 [2024-07-15 07:50:07.710907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:41:29.196 [2024-07-15 07:50:07.710918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.196 [2024-07-15 07:50:07.711047] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:29.196 [2024-07-15 07:50:07.711065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:29.196 [2024-07-15 07:50:07.711095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:29.196 [2024-07-15 07:50:07.711107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.196 [2024-07-15 07:50:07.711119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:29.196 [2024-07-15 07:50:07.711129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:29.196 [2024-07-15 07:50:07.711140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:29.196 [2024-07-15 07:50:07.711152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:29.196 [2024-07-15 07:50:07.711163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:29.196 [2024-07-15 07:50:07.711173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:29.197 [2024-07-15 07:50:07.711184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:29.197 [2024-07-15 07:50:07.711194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:29.197 [2024-07-15 07:50:07.711204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:29.197 [2024-07-15 07:50:07.711215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:29.197 [2024-07-15 07:50:07.711227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:29.197 [2024-07-15 07:50:07.711238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:29.197 [2024-07-15 07:50:07.711260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711271] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:29.197 [2024-07-15 07:50:07.711307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:29.197 [2024-07-15 07:50:07.711341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:29.197 [2024-07-15 07:50:07.711387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:29.197 [2024-07-15 07:50:07.711417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:29.197 [2024-07-15 07:50:07.711448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:29.197 [2024-07-15 07:50:07.711468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:29.197 [2024-07-15 07:50:07.711479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:29.197 [2024-07-15 07:50:07.711508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:29.197 [2024-07-15 07:50:07.711519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:29.197 [2024-07-15 07:50:07.711530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:29.197 [2024-07-15 07:50:07.711541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:29.197 [2024-07-15 07:50:07.711564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:29.197 [2024-07-15 07:50:07.711576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711586] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:29.197 [2024-07-15 07:50:07.711598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:29.197 [2024-07-15 07:50:07.711609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:29.197 [2024-07-15 07:50:07.711643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:29.197 [2024-07-15 07:50:07.711653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:29.197 [2024-07-15 07:50:07.711664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:29.197 
[2024-07-15 07:50:07.711675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:29.197 [2024-07-15 07:50:07.711685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:29.197 [2024-07-15 07:50:07.711696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:29.197 [2024-07-15 07:50:07.711708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:29.197 [2024-07-15 07:50:07.711722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:29.197 [2024-07-15 07:50:07.711748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:29.197 [2024-07-15 07:50:07.711760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:29.197 [2024-07-15 07:50:07.711771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:29.197 [2024-07-15 07:50:07.711782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:29.197 [2024-07-15 07:50:07.711793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:29.197 [2024-07-15 07:50:07.711804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:29.197 [2024-07-15 07:50:07.711815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:29.197 [2024-07-15 07:50:07.711827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:29.197 [2024-07-15 07:50:07.711838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:29.197 [2024-07-15 07:50:07.711893] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:29.197 [2024-07-15 07:50:07.711906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:41:29.197 [2024-07-15 07:50:07.711937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:29.197 [2024-07-15 07:50:07.711948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:29.197 [2024-07-15 07:50:07.711960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:29.197 [2024-07-15 07:50:07.711972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.197 [2024-07-15 07:50:07.711984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:29.197 [2024-07-15 07:50:07.711996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:41:29.197 [2024-07-15 07:50:07.712015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.197 [2024-07-15 07:50:07.781436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.197 [2024-07-15 07:50:07.781561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:29.197 [2024-07-15 07:50:07.781584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.351 ms 00:41:29.197 [2024-07-15 07:50:07.781597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.197 [2024-07-15 07:50:07.781740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.197 [2024-07-15 07:50:07.781757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:29.197 [2024-07-15 07:50:07.781772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:41:29.197 [2024-07-15 07:50:07.781784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.828307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.828386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:29.456 [2024-07-15 07:50:07.828404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.398 ms 00:41:29.456 [2024-07-15 07:50:07.828417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.828533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.828551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:29.456 [2024-07-15 07:50:07.828564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:29.456 [2024-07-15 07:50:07.828581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.829473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.829517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:29.456 [2024-07-15 07:50:07.829532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:41:29.456 [2024-07-15 07:50:07.829544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.829764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.829782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:29.456 [2024-07-15 07:50:07.829795] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:41:29.456 [2024-07-15 07:50:07.829806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.849154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.849207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:29.456 [2024-07-15 07:50:07.849223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.303 ms 00:41:29.456 [2024-07-15 07:50:07.849235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.865529] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:41:29.456 [2024-07-15 07:50:07.865581] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:29.456 [2024-07-15 07:50:07.865599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.865611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:29.456 [2024-07-15 07:50:07.865623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.234 ms 00:41:29.456 [2024-07-15 07:50:07.865634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.892891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.893025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:29.456 [2024-07-15 07:50:07.893060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.207 ms 00:41:29.456 [2024-07-15 07:50:07.893072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.456 [2024-07-15 07:50:07.910926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.456 [2024-07-15 07:50:07.910992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:29.456 [2024-07-15 07:50:07.911033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms 00:41:29.457 [2024-07-15 07:50:07.911046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:07.925756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:07.925809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:29.457 [2024-07-15 07:50:07.925826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.659 ms 00:41:29.457 [2024-07-15 07:50:07.925837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:07.926823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:07.926857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:29.457 [2024-07-15 07:50:07.926873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:41:29.457 [2024-07-15 07:50:07.926885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.018661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.018746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:29.457 [2024-07-15 07:50:08.018769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.742 ms 00:41:29.457 [2024-07-15 07:50:08.018797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.031316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:29.457 [2024-07-15 07:50:08.036474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.036527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:29.457 [2024-07-15 07:50:08.036544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.547 ms 00:41:29.457 [2024-07-15 07:50:08.036555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.036670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.036704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:29.457 [2024-07-15 07:50:08.036717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:29.457 [2024-07-15 07:50:08.036728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.039284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.039316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:29.457 [2024-07-15 07:50:08.039331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.458 ms 00:41:29.457 [2024-07-15 07:50:08.039372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.039419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.039449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:29.457 [2024-07-15 07:50:08.039461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:29.457 [2024-07-15 07:50:08.039483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.457 [2024-07-15 07:50:08.039553] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:29.457 [2024-07-15 07:50:08.039572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.457 [2024-07-15 07:50:08.039589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:29.457 [2024-07-15 07:50:08.039601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:41:29.457 [2024-07-15 07:50:08.039613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.759 [2024-07-15 07:50:08.070326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.759 [2024-07-15 07:50:08.070441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:29.759 [2024-07-15 07:50:08.070463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.681 ms 00:41:29.759 [2024-07-15 07:50:08.070476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.759 [2024-07-15 07:50:08.070669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.759 [2024-07-15 07:50:08.070689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:29.759 [2024-07-15 07:50:08.070703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:41:29.759 [2024-07-15 07:50:08.070731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:41:29.759 [2024-07-15 07:50:08.078234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.755 ms, result 0 00:42:11.165  Copying: 23/1024 [MB] (23 MBps) Copying: 49/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (26 MBps) Copying: 102/1024 [MB] (25 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 153/1024 [MB] (25 MBps) Copying: 179/1024 [MB] (25 MBps) Copying: 204/1024 [MB] (24 MBps) Copying: 228/1024 [MB] (24 MBps) Copying: 254/1024 [MB] (25 MBps) Copying: 280/1024 [MB] (25 MBps) Copying: 305/1024 [MB] (25 MBps) Copying: 330/1024 [MB] (25 MBps) Copying: 356/1024 [MB] (26 MBps) Copying: 382/1024 [MB] (25 MBps) Copying: 408/1024 [MB] (26 MBps) Copying: 435/1024 [MB] (26 MBps) Copying: 462/1024 [MB] (27 MBps) Copying: 488/1024 [MB] (26 MBps) Copying: 515/1024 [MB] (26 MBps) Copying: 539/1024 [MB] (24 MBps) Copying: 563/1024 [MB] (23 MBps) Copying: 587/1024 [MB] (23 MBps) Copying: 609/1024 [MB] (22 MBps) Copying: 633/1024 [MB] (23 MBps) Copying: 656/1024 [MB] (23 MBps) Copying: 679/1024 [MB] (23 MBps) Copying: 703/1024 [MB] (23 MBps) Copying: 728/1024 [MB] (24 MBps) Copying: 752/1024 [MB] (23 MBps) Copying: 776/1024 [MB] (24 MBps) Copying: 800/1024 [MB] (24 MBps) Copying: 825/1024 [MB] (24 MBps) Copying: 849/1024 [MB] (24 MBps) Copying: 872/1024 [MB] (23 MBps) Copying: 894/1024 [MB] (22 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 943/1024 [MB] (25 MBps) Copying: 968/1024 [MB] (25 MBps) Copying: 994/1024 [MB] (25 MBps) Copying: 1019/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 07:50:49.565159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.565271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:11.165 [2024-07-15 07:50:49.565303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:11.165 [2024-07-15 07:50:49.565322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.565369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:11.165 [2024-07-15 07:50:49.572921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.572994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:11.165 [2024-07-15 07:50:49.573023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.496 ms 00:42:11.165 [2024-07-15 07:50:49.573046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.573553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.573610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:11.165 [2024-07-15 07:50:49.573653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:42:11.165 [2024-07-15 07:50:49.573690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.581307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.581359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:11.165 [2024-07-15 07:50:49.581376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.570 ms 00:42:11.165 [2024-07-15 07:50:49.581389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 
07:50:49.588075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.588154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:11.165 [2024-07-15 07:50:49.588171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.643 ms 00:42:11.165 [2024-07-15 07:50:49.588183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.621745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.621794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:11.165 [2024-07-15 07:50:49.621829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.488 ms 00:42:11.165 [2024-07-15 07:50:49.621841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.640875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.640922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:11.165 [2024-07-15 07:50:49.640962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.987 ms 00:42:11.165 [2024-07-15 07:50:49.640975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.165 [2024-07-15 07:50:49.767568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.165 [2024-07-15 07:50:49.767653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:11.165 [2024-07-15 07:50:49.767677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.536 ms 00:42:11.165 [2024-07-15 07:50:49.767691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.425 [2024-07-15 07:50:49.802715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.425 [2024-07-15 07:50:49.802810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:42:11.425 [2024-07-15 07:50:49.802847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.996 ms 00:42:11.425 [2024-07-15 07:50:49.802860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.425 [2024-07-15 07:50:49.835237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.425 [2024-07-15 07:50:49.835319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:42:11.425 [2024-07-15 07:50:49.835341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.302 ms 00:42:11.425 [2024-07-15 07:50:49.835354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.425 [2024-07-15 07:50:49.864903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.425 [2024-07-15 07:50:49.864975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:11.425 [2024-07-15 07:50:49.864997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.458 ms 00:42:11.425 [2024-07-15 07:50:49.865039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.425 [2024-07-15 07:50:49.895323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.425 [2024-07-15 07:50:49.895370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:11.425 [2024-07-15 07:50:49.895390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.162 ms 00:42:11.425 [2024-07-15 07:50:49.895403] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.425 [2024-07-15 07:50:49.895449] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:11.425 [2024-07-15 07:50:49.895486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:42:11.425 [2024-07-15 07:50:49.895504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.895995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896109] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:11.425 [2024-07-15 07:50:49.896311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896426] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 
07:50:49.896759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:11.426 [2024-07-15 07:50:49.896795] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:11.426 [2024-07-15 07:50:49.896807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45559ceb-2fe3-42d7-a6cd-26f4649c2042 00:42:11.426 [2024-07-15 07:50:49.896820] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:42:11.426 [2024-07-15 07:50:49.896832] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16576 00:42:11.426 [2024-07-15 07:50:49.896844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15616 00:42:11.426 [2024-07-15 07:50:49.896856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0615 00:42:11.426 [2024-07-15 07:50:49.896875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:11.426 [2024-07-15 07:50:49.896888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:11.426 [2024-07-15 07:50:49.896900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:11.426 [2024-07-15 07:50:49.896911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:11.426 [2024-07-15 07:50:49.896923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:11.426 [2024-07-15 07:50:49.896935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.426 [2024-07-15 07:50:49.896952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:11.426 [2024-07-15 07:50:49.896964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:42:11.426 [2024-07-15 07:50:49.896989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.914291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.426 [2024-07-15 07:50:49.914345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:11.426 [2024-07-15 07:50:49.914380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.256 ms 00:42:11.426 [2024-07-15 07:50:49.914409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.915002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:11.426 [2024-07-15 07:50:49.915033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:11.426 [2024-07-15 07:50:49.915049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:42:11.426 [2024-07-15 07:50:49.915067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.955253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.426 [2024-07-15 07:50:49.955359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:11.426 [2024-07-15 07:50:49.955402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.426 [2024-07-15 07:50:49.955423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.955566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.426 [2024-07-15 07:50:49.955585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:42:11.426 [2024-07-15 07:50:49.955598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.426 [2024-07-15 07:50:49.955610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.955720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.426 [2024-07-15 07:50:49.955739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:11.426 [2024-07-15 07:50:49.955753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.426 [2024-07-15 07:50:49.955764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.426 [2024-07-15 07:50:49.955795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.426 [2024-07-15 07:50:49.955809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:11.426 [2024-07-15 07:50:49.955822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.426 [2024-07-15 07:50:49.955834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.070558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.070644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:11.685 [2024-07-15 07:50:50.070666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.070687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.159489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.159571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:11.685 [2024-07-15 07:50:50.159593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.159606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.159710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.159729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:11.685 [2024-07-15 07:50:50.159742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.159756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.159806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.159830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:11.685 [2024-07-15 07:50:50.159843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.159855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.160018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.160038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:11.685 [2024-07-15 07:50:50.160052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.160063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.160119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 
07:50:50.160144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:11.685 [2024-07-15 07:50:50.160157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.160169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.160222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.160241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:11.685 [2024-07-15 07:50:50.160256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.160275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.160336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.685 [2024-07-15 07:50:50.160359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:11.685 [2024-07-15 07:50:50.160372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.685 [2024-07-15 07:50:50.160384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.685 [2024-07-15 07:50:50.160562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.366 ms, result 0 00:42:13.059 00:42:13.059 00:42:13.059 07:50:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:42:15.615 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81898 00:42:15.615 07:50:53 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81898 ']' 00:42:15.615 07:50:53 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81898 00:42:15.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81898) - No such process 00:42:15.615 Process with pid 81898 is not found 00:42:15.615 07:50:53 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81898 is not found' 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:42:15.615 Remove shared memory files 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:15.615 07:50:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:42:15.615 00:42:15.615 real 3m23.846s 00:42:15.615 user 3m8.229s 00:42:15.615 sys 0m17.592s 00:42:15.615 07:50:53 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:15.615 07:50:53 ftl.ftl_restore -- common/autotest_common.sh@10 
-- # set +x 00:42:15.615 ************************************ 00:42:15.615 END TEST ftl_restore 00:42:15.615 ************************************ 00:42:15.615 07:50:53 ftl -- common/autotest_common.sh@1142 -- # return 0 00:42:15.615 07:50:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:42:15.615 07:50:53 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:42:15.615 07:50:53 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:15.615 07:50:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:15.615 ************************************ 00:42:15.615 START TEST ftl_dirty_shutdown 00:42:15.615 ************************************ 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:42:15.615 * Looking for test storage... 00:42:15.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:15.615 07:50:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:42:15.615 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83987 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83987 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83987 ']' 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:15.616 07:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:15.616 [2024-07-15 07:50:54.125057] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
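[Aside — not part of the captured console output] The xtrace lines that follow assemble the bdev stack for the dirty-shutdown test one RPC at a time. Condensed, and using the same rpc.py calls and arguments that appear further down in this log (the lvstore and lvol UUIDs are whatever this particular run was handed), the sequence is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe -> nvme0n1
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base bdev
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>              # 103424 MiB thin-provisioned lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache NVMe -> nvc0n1
    $RPC bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB cache slice nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0   # FTL on lvol + cache
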
00:42:15.616 [2024-07-15 07:50:54.125253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83987 ] 00:42:15.874 [2024-07-15 07:50:54.293216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:16.132 [2024-07-15 07:50:54.591242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:42:17.067 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:17.326 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:42:17.326 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:42:17.326 07:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:42:17.326 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:42:17.326 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:42:17.327 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:42:17.327 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:42:17.327 07:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:42:17.586 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:42:17.586 { 00:42:17.586 "name": "nvme0n1", 00:42:17.586 "aliases": [ 00:42:17.586 "e8b7c1ab-9e01-4914-8e31-b31bde7cfd56" 00:42:17.586 ], 00:42:17.586 "product_name": "NVMe disk", 00:42:17.586 "block_size": 4096, 00:42:17.586 "num_blocks": 1310720, 00:42:17.586 "uuid": "e8b7c1ab-9e01-4914-8e31-b31bde7cfd56", 00:42:17.586 "assigned_rate_limits": { 00:42:17.586 "rw_ios_per_sec": 0, 00:42:17.586 "rw_mbytes_per_sec": 0, 00:42:17.586 "r_mbytes_per_sec": 0, 00:42:17.586 "w_mbytes_per_sec": 0 00:42:17.586 }, 00:42:17.586 "claimed": true, 00:42:17.586 "claim_type": "read_many_write_one", 00:42:17.586 "zoned": false, 00:42:17.586 "supported_io_types": { 00:42:17.586 "read": true, 00:42:17.586 "write": true, 00:42:17.586 "unmap": true, 00:42:17.586 "flush": true, 00:42:17.586 "reset": true, 00:42:17.586 "nvme_admin": true, 00:42:17.586 "nvme_io": true, 00:42:17.586 "nvme_io_md": false, 00:42:17.586 "write_zeroes": true, 00:42:17.586 "zcopy": false, 00:42:17.586 "get_zone_info": false, 00:42:17.586 "zone_management": false, 00:42:17.586 "zone_append": false, 00:42:17.586 "compare": true, 00:42:17.586 "compare_and_write": false, 00:42:17.586 "abort": true, 00:42:17.586 "seek_hole": false, 00:42:17.586 "seek_data": false, 00:42:17.586 "copy": true, 00:42:17.586 
"nvme_iov_md": false 00:42:17.586 }, 00:42:17.586 "driver_specific": { 00:42:17.586 "nvme": [ 00:42:17.586 { 00:42:17.586 "pci_address": "0000:00:11.0", 00:42:17.586 "trid": { 00:42:17.586 "trtype": "PCIe", 00:42:17.586 "traddr": "0000:00:11.0" 00:42:17.586 }, 00:42:17.586 "ctrlr_data": { 00:42:17.586 "cntlid": 0, 00:42:17.586 "vendor_id": "0x1b36", 00:42:17.586 "model_number": "QEMU NVMe Ctrl", 00:42:17.586 "serial_number": "12341", 00:42:17.586 "firmware_revision": "8.0.0", 00:42:17.586 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:17.586 "oacs": { 00:42:17.586 "security": 0, 00:42:17.586 "format": 1, 00:42:17.586 "firmware": 0, 00:42:17.586 "ns_manage": 1 00:42:17.586 }, 00:42:17.586 "multi_ctrlr": false, 00:42:17.586 "ana_reporting": false 00:42:17.586 }, 00:42:17.586 "vs": { 00:42:17.586 "nvme_version": "1.4" 00:42:17.586 }, 00:42:17.586 "ns_data": { 00:42:17.586 "id": 1, 00:42:17.586 "can_share": false 00:42:17.586 } 00:42:17.586 } 00:42:17.586 ], 00:42:17.586 "mp_policy": "active_passive" 00:42:17.586 } 00:42:17.586 } 00:42:17.586 ]' 00:42:17.586 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:42:17.586 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:42:17.586 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=39ff8240-741b-4d59-9b43-6090d24c275e 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:42:17.845 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39ff8240-741b-4d59-9b43-6090d24c275e 00:42:18.411 07:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:42:18.669 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f1fa679b-4ccd-4839-916a-0104a4ff10e7 00:42:18.669 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1fa679b-4ccd-4839-916a-0104a4ff10e7 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f336b212-cb17-441f-9829-7bcc3159d752 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f336b212-cb17-441f-9829-7bcc3159d752 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:42:18.927 
07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f336b212-cb17-441f-9829-7bcc3159d752 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f336b212-cb17-441f-9829-7bcc3159d752 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f336b212-cb17-441f-9829-7bcc3159d752 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:42:18.927 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f336b212-cb17-441f-9829-7bcc3159d752 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:42:19.185 { 00:42:19.185 "name": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:19.185 "aliases": [ 00:42:19.185 "lvs/nvme0n1p0" 00:42:19.185 ], 00:42:19.185 "product_name": "Logical Volume", 00:42:19.185 "block_size": 4096, 00:42:19.185 "num_blocks": 26476544, 00:42:19.185 "uuid": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:19.185 "assigned_rate_limits": { 00:42:19.185 "rw_ios_per_sec": 0, 00:42:19.185 "rw_mbytes_per_sec": 0, 00:42:19.185 "r_mbytes_per_sec": 0, 00:42:19.185 "w_mbytes_per_sec": 0 00:42:19.185 }, 00:42:19.185 "claimed": false, 00:42:19.185 "zoned": false, 00:42:19.185 "supported_io_types": { 00:42:19.185 "read": true, 00:42:19.185 "write": true, 00:42:19.185 "unmap": true, 00:42:19.185 "flush": false, 00:42:19.185 "reset": true, 00:42:19.185 "nvme_admin": false, 00:42:19.185 "nvme_io": false, 00:42:19.185 "nvme_io_md": false, 00:42:19.185 "write_zeroes": true, 00:42:19.185 "zcopy": false, 00:42:19.185 "get_zone_info": false, 00:42:19.185 "zone_management": false, 00:42:19.185 "zone_append": false, 00:42:19.185 "compare": false, 00:42:19.185 "compare_and_write": false, 00:42:19.185 "abort": false, 00:42:19.185 "seek_hole": true, 00:42:19.185 "seek_data": true, 00:42:19.185 "copy": false, 00:42:19.185 "nvme_iov_md": false 00:42:19.185 }, 00:42:19.185 "driver_specific": { 00:42:19.185 "lvol": { 00:42:19.185 "lvol_store_uuid": "f1fa679b-4ccd-4839-916a-0104a4ff10e7", 00:42:19.185 "base_bdev": "nvme0n1", 00:42:19.185 "thin_provision": true, 00:42:19.185 "num_allocated_clusters": 0, 00:42:19.185 "snapshot": false, 00:42:19.185 "clone": false, 00:42:19.185 "esnap_clone": false 00:42:19.185 } 00:42:19.185 } 00:42:19.185 } 00:42:19.185 ]' 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:42:19.185 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f336b212-cb17-441f-9829-7bcc3159d752 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f336b212-cb17-441f-9829-7bcc3159d752 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:42:19.453 07:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:42:19.453 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f336b212-cb17-441f-9829-7bcc3159d752 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:42:19.733 { 00:42:19.733 "name": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:19.733 "aliases": [ 00:42:19.733 "lvs/nvme0n1p0" 00:42:19.733 ], 00:42:19.733 "product_name": "Logical Volume", 00:42:19.733 "block_size": 4096, 00:42:19.733 "num_blocks": 26476544, 00:42:19.733 "uuid": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:19.733 "assigned_rate_limits": { 00:42:19.733 "rw_ios_per_sec": 0, 00:42:19.733 "rw_mbytes_per_sec": 0, 00:42:19.733 "r_mbytes_per_sec": 0, 00:42:19.733 "w_mbytes_per_sec": 0 00:42:19.733 }, 00:42:19.733 "claimed": false, 00:42:19.733 "zoned": false, 00:42:19.733 "supported_io_types": { 00:42:19.733 "read": true, 00:42:19.733 "write": true, 00:42:19.733 "unmap": true, 00:42:19.733 "flush": false, 00:42:19.733 "reset": true, 00:42:19.733 "nvme_admin": false, 00:42:19.733 "nvme_io": false, 00:42:19.733 "nvme_io_md": false, 00:42:19.733 "write_zeroes": true, 00:42:19.733 "zcopy": false, 00:42:19.733 "get_zone_info": false, 00:42:19.733 "zone_management": false, 00:42:19.733 "zone_append": false, 00:42:19.733 "compare": false, 00:42:19.733 "compare_and_write": false, 00:42:19.733 "abort": false, 00:42:19.733 "seek_hole": true, 00:42:19.733 "seek_data": true, 00:42:19.733 "copy": false, 00:42:19.733 "nvme_iov_md": false 00:42:19.733 }, 00:42:19.733 "driver_specific": { 00:42:19.733 "lvol": { 00:42:19.733 "lvol_store_uuid": "f1fa679b-4ccd-4839-916a-0104a4ff10e7", 00:42:19.733 "base_bdev": "nvme0n1", 00:42:19.733 "thin_provision": true, 00:42:19.733 "num_allocated_clusters": 0, 00:42:19.733 "snapshot": false, 00:42:19.733 "clone": false, 00:42:19.733 "esnap_clone": false 00:42:19.733 } 00:42:19.733 } 00:42:19.733 } 00:42:19.733 ]' 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:42:19.733 07:50:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f336b212-cb17-441f-9829-7bcc3159d752 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f336b212-cb17-441f-9829-7bcc3159d752 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f336b212-cb17-441f-9829-7bcc3159d752 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:42:20.299 { 00:42:20.299 "name": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:20.299 "aliases": [ 00:42:20.299 "lvs/nvme0n1p0" 00:42:20.299 ], 00:42:20.299 "product_name": "Logical Volume", 00:42:20.299 "block_size": 4096, 00:42:20.299 "num_blocks": 26476544, 00:42:20.299 "uuid": "f336b212-cb17-441f-9829-7bcc3159d752", 00:42:20.299 "assigned_rate_limits": { 00:42:20.299 "rw_ios_per_sec": 0, 00:42:20.299 "rw_mbytes_per_sec": 0, 00:42:20.299 "r_mbytes_per_sec": 0, 00:42:20.299 "w_mbytes_per_sec": 0 00:42:20.299 }, 00:42:20.299 "claimed": false, 00:42:20.299 "zoned": false, 00:42:20.299 "supported_io_types": { 00:42:20.299 "read": true, 00:42:20.299 "write": true, 00:42:20.299 "unmap": true, 00:42:20.299 "flush": false, 00:42:20.299 "reset": true, 00:42:20.299 "nvme_admin": false, 00:42:20.299 "nvme_io": false, 00:42:20.299 "nvme_io_md": false, 00:42:20.299 "write_zeroes": true, 00:42:20.299 "zcopy": false, 00:42:20.299 "get_zone_info": false, 00:42:20.299 "zone_management": false, 00:42:20.299 "zone_append": false, 00:42:20.299 "compare": false, 00:42:20.299 "compare_and_write": false, 00:42:20.299 "abort": false, 00:42:20.299 "seek_hole": true, 00:42:20.299 "seek_data": true, 00:42:20.299 "copy": false, 00:42:20.299 "nvme_iov_md": false 00:42:20.299 }, 00:42:20.299 "driver_specific": { 00:42:20.299 "lvol": { 00:42:20.299 "lvol_store_uuid": "f1fa679b-4ccd-4839-916a-0104a4ff10e7", 00:42:20.299 "base_bdev": "nvme0n1", 00:42:20.299 "thin_provision": true, 00:42:20.299 "num_allocated_clusters": 0, 00:42:20.299 "snapshot": false, 00:42:20.299 "clone": false, 00:42:20.299 "esnap_clone": false 00:42:20.299 } 00:42:20.299 } 00:42:20.299 } 00:42:20.299 ]' 00:42:20.299 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f336b212-cb17-441f-9829-7bcc3159d752 
--l2p_dram_limit 10' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:42:20.557 07:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f336b212-cb17-441f-9829-7bcc3159d752 --l2p_dram_limit 10 -c nvc0n1p0 00:42:20.815 [2024-07-15 07:50:59.255772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.255844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:20.815 [2024-07-15 07:50:59.255884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:20.815 [2024-07-15 07:50:59.255901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.256005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.256028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:20.815 [2024-07-15 07:50:59.256041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:42:20.815 [2024-07-15 07:50:59.256055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.256086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:20.815 [2024-07-15 07:50:59.257196] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:20.815 [2024-07-15 07:50:59.257232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.257253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:20.815 [2024-07-15 07:50:59.257267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:42:20.815 [2024-07-15 07:50:59.257281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.257576] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f0e24a3-c59b-4e19-8a54-562f5b275761 00:42:20.815 [2024-07-15 07:50:59.260152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.260193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:20.815 [2024-07-15 07:50:59.260230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:42:20.815 [2024-07-15 07:50:59.260243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.274553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.274643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:20.815 [2024-07-15 07:50:59.274685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.212 ms 00:42:20.815 [2024-07-15 07:50:59.274704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.274921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.274964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:20.815 [2024-07-15 07:50:59.274996] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:42:20.815 [2024-07-15 07:50:59.275017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.275173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.275214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:20.815 [2024-07-15 07:50:59.275248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:42:20.815 [2024-07-15 07:50:59.275275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.275329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:20.815 [2024-07-15 07:50:59.283498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.283572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:20.815 [2024-07-15 07:50:59.283601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.190 ms 00:42:20.815 [2024-07-15 07:50:59.283632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.283706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.815 [2024-07-15 07:50:59.283741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:20.815 [2024-07-15 07:50:59.283764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:42:20.815 [2024-07-15 07:50:59.283789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.815 [2024-07-15 07:50:59.283864] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:20.815 [2024-07-15 07:50:59.284139] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:20.815 [2024-07-15 07:50:59.284195] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:20.815 [2024-07-15 07:50:59.284228] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:42:20.815 [2024-07-15 07:50:59.284251] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:20.815 [2024-07-15 07:50:59.284277] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:20.815 [2024-07-15 07:50:59.284291] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:20.815 [2024-07-15 07:50:59.284310] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:20.815 [2024-07-15 07:50:59.284325] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:20.815 [2024-07-15 07:50:59.284341] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:20.816 [2024-07-15 07:50:59.284360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.816 [2024-07-15 07:50:59.284385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:20.816 [2024-07-15 07:50:59.284407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:42:20.816 [2024-07-15 07:50:59.284433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.816 [2024-07-15 07:50:59.284581] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.816 [2024-07-15 07:50:59.284617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:20.816 [2024-07-15 07:50:59.284641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:42:20.816 [2024-07-15 07:50:59.284667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.816 [2024-07-15 07:50:59.284832] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:20.816 [2024-07-15 07:50:59.284871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:20.816 [2024-07-15 07:50:59.284926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:20.816 [2024-07-15 07:50:59.284957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.284982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:20.816 [2024-07-15 07:50:59.285007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:20.816 [2024-07-15 07:50:59.285072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:20.816 [2024-07-15 07:50:59.285115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:20.816 [2024-07-15 07:50:59.285137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:20.816 [2024-07-15 07:50:59.285157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:20.816 [2024-07-15 07:50:59.285184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:20.816 [2024-07-15 07:50:59.285203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:20.816 [2024-07-15 07:50:59.285224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:20.816 [2024-07-15 07:50:59.285265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285283] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:20.816 [2024-07-15 07:50:59.285330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:20.816 [2024-07-15 07:50:59.285674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:20.816 [2024-07-15 07:50:59.285757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285800] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:20.816 [2024-07-15 07:50:59.285825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:20.816 [2024-07-15 07:50:59.285870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:20.816 [2024-07-15 07:50:59.285889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:20.816 [2024-07-15 07:50:59.285918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:20.816 [2024-07-15 07:50:59.285938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:20.816 [2024-07-15 07:50:59.285962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:20.816 [2024-07-15 07:50:59.285981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:20.816 [2024-07-15 07:50:59.286004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:20.816 [2024-07-15 07:50:59.286028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:20.816 [2024-07-15 07:50:59.286055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.286074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:20.816 [2024-07-15 07:50:59.286095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:20.816 [2024-07-15 07:50:59.286110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.286130] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:20.816 [2024-07-15 07:50:59.286144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:20.816 [2024-07-15 07:50:59.286159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:20.816 [2024-07-15 07:50:59.286170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:20.816 [2024-07-15 07:50:59.286185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:20.816 [2024-07-15 07:50:59.286196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:20.816 [2024-07-15 07:50:59.286212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:20.816 [2024-07-15 07:50:59.286224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:20.816 [2024-07-15 07:50:59.286239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:20.816 [2024-07-15 07:50:59.286252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:20.816 [2024-07-15 07:50:59.286271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:20.816 [2024-07-15 07:50:59.286288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:20.816 [2024-07-15 07:50:59.286341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:20.816 [2024-07-15 07:50:59.286367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:20.816 [2024-07-15 07:50:59.286387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:20.816 [2024-07-15 07:50:59.286406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:20.816 [2024-07-15 07:50:59.286418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:20.816 [2024-07-15 07:50:59.286441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:20.816 [2024-07-15 07:50:59.286484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:20.816 [2024-07-15 07:50:59.286515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:20.816 [2024-07-15 07:50:59.286537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:20.816 [2024-07-15 07:50:59.286675] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:20.816 [2024-07-15 07:50:59.286697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:20.816 [2024-07-15 07:50:59.286745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:20.816 [2024-07-15 07:50:59.286770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:20.816 [2024-07-15 07:50:59.286791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:20.816 [2024-07-15 07:50:59.286818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:20.816 [2024-07-15 07:50:59.286840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:20.816 [2024-07-15 07:50:59.286866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:42:20.816 [2024-07-15 07:50:59.286888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:20.816 [2024-07-15 07:50:59.287043] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:42:20.816 [2024-07-15 07:50:59.287088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:42:23.341 [2024-07-15 07:51:01.938708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.341 [2024-07-15 07:51:01.938790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:23.341 [2024-07-15 07:51:01.938817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2651.677 ms 00:42:23.341 [2024-07-15 07:51:01.938831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:01.983681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:01.983756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:23.598 [2024-07-15 07:51:01.983783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.503 ms 00:42:23.598 [2024-07-15 07:51:01.983796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:01.984056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:01.984076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:23.598 [2024-07-15 07:51:01.984093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:42:23.598 [2024-07-15 07:51:01.984110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.031891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.031967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:23.598 [2024-07-15 07:51:02.031993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.716 ms 00:42:23.598 [2024-07-15 07:51:02.032007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.032096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.032121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:23.598 [2024-07-15 07:51:02.032138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:23.598 [2024-07-15 07:51:02.032150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.032985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.033020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:23.598 [2024-07-15 07:51:02.033039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:42:23.598 [2024-07-15 07:51:02.033051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.033237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.033255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:23.598 [2024-07-15 07:51:02.033274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:42:23.598 [2024-07-15 07:51:02.033286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.056612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.056684] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:23.598 [2024-07-15 07:51:02.056708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.289 ms 00:42:23.598 [2024-07-15 07:51:02.056721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.072731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:23.598 [2024-07-15 07:51:02.078212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.078257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:23.598 [2024-07-15 07:51:02.078278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.339 ms 00:42:23.598 [2024-07-15 07:51:02.078294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.166799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.166899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:23.598 [2024-07-15 07:51:02.166923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.444 ms 00:42:23.598 [2024-07-15 07:51:02.166948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.167223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.167253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:23.598 [2024-07-15 07:51:02.167267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:42:23.598 [2024-07-15 07:51:02.167286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.598 [2024-07-15 07:51:02.198791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.598 [2024-07-15 07:51:02.198853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:23.598 [2024-07-15 07:51:02.198875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.430 ms 00:42:23.598 [2024-07-15 07:51:02.198890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.230497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.230568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:23.857 [2024-07-15 07:51:02.230590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.524 ms 00:42:23.857 [2024-07-15 07:51:02.230606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.231646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.231697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:23.857 [2024-07-15 07:51:02.231716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:42:23.857 [2024-07-15 07:51:02.231737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.325637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.325742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:23.857 [2024-07-15 07:51:02.325776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.812 ms 00:42:23.857 [2024-07-15 07:51:02.325797] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.359267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.359325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:23.857 [2024-07-15 07:51:02.359348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.410 ms 00:42:23.857 [2024-07-15 07:51:02.359365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.390821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.390884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:42:23.857 [2024-07-15 07:51:02.390904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.398 ms 00:42:23.857 [2024-07-15 07:51:02.390920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.422077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.422132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:23.857 [2024-07-15 07:51:02.422152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.091 ms 00:42:23.857 [2024-07-15 07:51:02.422167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.422238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.422263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:23.857 [2024-07-15 07:51:02.422277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:42:23.857 [2024-07-15 07:51:02.422296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.422427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:23.857 [2024-07-15 07:51:02.422475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:23.857 [2024-07-15 07:51:02.422497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:42:23.857 [2024-07-15 07:51:02.422512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:23.857 [2024-07-15 07:51:02.423995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3167.666 ms, result 0 00:42:23.857 { 00:42:23.857 "name": "ftl0", 00:42:23.857 "uuid": "1f0e24a3-c59b-4e19-8a54-562f5b275761" 00:42:23.857 } 00:42:23.857 07:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:42:23.857 07:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:42:24.424 07:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:42:24.424 07:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:42:24.424 07:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:42:24.424 /dev/nbd0 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:42:24.682 1+0 records in 00:42:24.682 1+0 records out 00:42:24.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365514 s, 11.2 MB/s 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:42:24.682 07:51:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:42:24.682 [2024-07-15 07:51:03.187158] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:42:24.682 [2024-07-15 07:51:03.187337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84132 ] 00:42:24.940 [2024-07-15 07:51:03.369092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:25.244 [2024-07-15 07:51:03.708473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:33.384  Copying: 161/1024 [MB] (161 MBps) Copying: 323/1024 [MB] (162 MBps) Copying: 483/1024 [MB] (160 MBps) Copying: 625/1024 [MB] (142 MBps) Copying: 788/1024 [MB] (162 MBps) Copying: 948/1024 [MB] (159 MBps) Copying: 1024/1024 [MB] (average 158 MBps) 00:42:33.384 00:42:33.384 07:51:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:42:35.911 07:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:42:35.911 [2024-07-15 07:51:14.200640] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:42:35.911 [2024-07-15 07:51:14.200802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84248 ] 00:42:35.911 [2024-07-15 07:51:14.369112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:36.168 [2024-07-15 07:51:14.639725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:43.106  Copying: 14/1024 [MB] (14 MBps) Copying: 28/1024 [MB] (14 MBps) Copying: 43/1024 [MB] (15 MBps) Copying: 58/1024 [MB] (15 MBps) Copying: 74/1024 [MB] (15 MBps) Copying: 90/1024 [MB] (15 MBps) Copying: 106/1024 [MB] (16 MBps) Copying: 121/1024 [MB] (15 MBps) Copying: 135/1024 [MB] (13 MBps) Copying: 150/1024 [MB] (14 MBps) Copying: 163/1024 [MB] (13 MBps) Copying: 178/1024 [MB] (14 MBps) Copying: 192/1024 [MB] (14 MBps) Copying: 208/1024 [MB] (15 MBps) Copying: 223/1024 [MB] (15 MBps) Copying: 238/1024 [MB] (15 MBps) Copying: 254/1024 [MB] (15 MBps) Copying: 270/1024 [MB] (16 MBps) Copying: 286/1024 [MB] (15 MBps) Copying: 302/1024 [MB] (15 MBps) Copying: 318/1024 [MB] (16 MBps) Copying: 334/1024 [MB] (15 MBps) Copying: 350/1024 [MB] (16 MBps) Copying: 366/1024 [MB] (16 MBps) Copying: 382/1024 [MB] (15 MBps) Copying: 398/1024 [MB] (16 MBps) Copying: 414/1024 [MB] (15 MBps) Copying: 430/1024 [MB] (16 MBps) Copying: 445/1024 [MB] (15 MBps) Copying: 461/1024 [MB] (15 MBps) Copying: 477/1024 [MB] (15 MBps) Copying: 493/1024 [MB] (16 MBps) Copying: 508/1024 [MB] (15 MBps) Copying: 524/1024 [MB] (15 MBps) Copying: 539/1024 [MB] (15 MBps) Copying: 557/1024 [MB] (17 MBps) Copying: 574/1024 [MB] (17 MBps) Copying: 591/1024 [MB] (17 MBps) Copying: 609/1024 [MB] (17 MBps) Copying: 626/1024 [MB] (17 MBps) Copying: 643/1024 [MB] (17 MBps) Copying: 660/1024 [MB] (16 MBps) Copying: 677/1024 [MB] (16 MBps) Copying: 693/1024 [MB] (16 MBps) Copying: 710/1024 [MB] (16 MBps) Copying: 726/1024 [MB] (15 MBps) Copying: 741/1024 [MB] (15 MBps) Copying: 757/1024 [MB] (15 MBps) Copying: 772/1024 [MB] (15 MBps) Copying: 787/1024 [MB] (15 MBps) Copying: 805/1024 [MB] (17 MBps) Copying: 822/1024 [MB] (16 MBps) Copying: 837/1024 [MB] (14 MBps) Copying: 851/1024 [MB] (14 MBps) Copying: 866/1024 [MB] (14 MBps) Copying: 881/1024 [MB] (14 MBps) Copying: 897/1024 [MB] (15 MBps) Copying: 914/1024 [MB] (16 MBps) Copying: 929/1024 [MB] (15 MBps) Copying: 944/1024 [MB] (14 MBps) Copying: 959/1024 [MB] (15 MBps) Copying: 974/1024 [MB] (14 MBps) Copying: 988/1024 [MB] (14 MBps) Copying: 1004/1024 [MB] (15 MBps) Copying: 1020/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:43:43.106 00:43:43.106 07:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:43:43.106 07:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:43:43.364 07:52:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:43:43.621 [2024-07-15 07:52:22.134734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.134844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:43.621 [2024-07-15 07:52:22.134887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:43.621 [2024-07-15 07:52:22.134905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.134949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:43.621 [2024-07-15 07:52:22.139122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.139164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:43.621 [2024-07-15 07:52:22.139182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:43:43.621 [2024-07-15 07:52:22.139199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.141412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.141483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:43.621 [2024-07-15 07:52:22.141503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:43:43.621 [2024-07-15 07:52:22.141519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.160135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.160217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:43.621 [2024-07-15 07:52:22.160236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.585 ms 00:43:43.621 [2024-07-15 07:52:22.160252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.166880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.166947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:43.621 [2024-07-15 07:52:22.166964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.582 ms 00:43:43.621 [2024-07-15 07:52:22.166979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.200260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.200314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:43.621 [2024-07-15 07:52:22.200333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.188 ms 00:43:43.621 [2024-07-15 07:52:22.200348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.220273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.220338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:43.621 [2024-07-15 07:52:22.220363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.867 ms 00:43:43.621 [2024-07-15 07:52:22.220378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.621 [2024-07-15 07:52:22.220635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.621 [2024-07-15 07:52:22.220666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:43.621 [2024-07-15 07:52:22.220681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:43:43.621 [2024-07-15 07:52:22.220696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.880 [2024-07-15 07:52:22.252547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.880 [2024-07-15 07:52:22.252626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:43:43.880 [2024-07-15 07:52:22.252647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.823 ms 00:43:43.880 [2024-07-15 07:52:22.252663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.880 [2024-07-15 07:52:22.284136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.880 [2024-07-15 07:52:22.284208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:43:43.880 [2024-07-15 07:52:22.284229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.413 ms 00:43:43.880 [2024-07-15 07:52:22.284244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.880 [2024-07-15 07:52:22.315548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.880 [2024-07-15 07:52:22.315605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:43.880 [2024-07-15 07:52:22.315624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.246 ms 00:43:43.880 [2024-07-15 07:52:22.315638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.880 [2024-07-15 07:52:22.346133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.880 [2024-07-15 07:52:22.346221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:43.880 [2024-07-15 07:52:22.346243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.330 ms 00:43:43.880 [2024-07-15 07:52:22.346260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.880 [2024-07-15 07:52:22.346330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:43.880 [2024-07-15 07:52:22.346363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:43:43.880 [2024-07-15 07:52:22.346595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:43:43.880 [2024-07-15 07:52:22.346756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.346998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347735] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:43:43.881 [2024-07-15 07:52:22.347950] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:43.881 [2024-07-15 07:52:22.347963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0e24a3-c59b-4e19-8a54-562f5b275761 00:43:43.881 [2024-07-15 07:52:22.347979] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:43.881 [2024-07-15 07:52:22.347991] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:43.881 [2024-07-15 07:52:22.348017] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:43.881 [2024-07-15 07:52:22.348029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:43.881 [2024-07-15 07:52:22.348043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:43.881 [2024-07-15 07:52:22.348055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:43.881 [2024-07-15 07:52:22.348079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:43.881 [2024-07-15 07:52:22.348090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:43.881 [2024-07-15 07:52:22.348103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:43.881 [2024-07-15 07:52:22.348116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.881 [2024-07-15 07:52:22.348131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:43.881 [2024-07-15 07:52:22.348144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:43:43.881 [2024-07-15 07:52:22.348158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.881 [2024-07-15 07:52:22.366029] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.881 [2024-07-15 07:52:22.366084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:43.881 [2024-07-15 07:52:22.366103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.794 ms 00:43:43.881 [2024-07-15 07:52:22.366118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.881 [2024-07-15 07:52:22.366698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.881 [2024-07-15 07:52:22.366736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:43.881 [2024-07-15 07:52:22.366752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:43:43.881 [2024-07-15 07:52:22.366767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.881 [2024-07-15 07:52:22.423867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.881 [2024-07-15 07:52:22.423941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:43.882 [2024-07-15 07:52:22.423962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.882 [2024-07-15 07:52:22.423978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.882 [2024-07-15 07:52:22.424083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.882 [2024-07-15 07:52:22.424103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:43.882 [2024-07-15 07:52:22.424117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.882 [2024-07-15 07:52:22.424131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.882 [2024-07-15 07:52:22.424267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.882 [2024-07-15 07:52:22.424309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:43.882 [2024-07-15 07:52:22.424324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.882 [2024-07-15 07:52:22.424339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.882 [2024-07-15 07:52:22.424367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.882 [2024-07-15 07:52:22.424389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:43.882 [2024-07-15 07:52:22.424401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.882 [2024-07-15 07:52:22.424416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.158 [2024-07-15 07:52:22.538353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.158 [2024-07-15 07:52:22.538440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:44.158 [2024-07-15 07:52:22.538472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.158 [2024-07-15 07:52:22.538490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.158 [2024-07-15 07:52:22.630220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.158 [2024-07-15 07:52:22.630308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:44.158 [2024-07-15 07:52:22.630331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.158 [2024-07-15 07:52:22.630348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:43:44.158 [2024-07-15 07:52:22.630515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.158 [2024-07-15 07:52:22.630544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:44.158 [2024-07-15 07:52:22.630563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.630579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.630654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.159 [2024-07-15 07:52:22.630681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:44.159 [2024-07-15 07:52:22.630695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.630709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.630859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.159 [2024-07-15 07:52:22.630895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:44.159 [2024-07-15 07:52:22.630910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.630928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.630986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.159 [2024-07-15 07:52:22.631009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:44.159 [2024-07-15 07:52:22.631023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.631037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.631096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.159 [2024-07-15 07:52:22.631123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:44.159 [2024-07-15 07:52:22.631136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.631154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.631219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:44.159 [2024-07-15 07:52:22.631244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:44.159 [2024-07-15 07:52:22.631258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:44.159 [2024-07-15 07:52:22.631272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:44.159 [2024-07-15 07:52:22.631493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.690 ms, result 0 00:43:44.159 true 00:43:44.159 07:52:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83987 00:43:44.159 07:52:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83987 00:43:44.159 07:52:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:43:44.159 [2024-07-15 07:52:22.759862] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:43:44.159 [2024-07-15 07:52:22.760074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84923 ] 00:43:44.417 [2024-07-15 07:52:22.932742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:44.676 [2024-07-15 07:52:23.212322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.795  Copying: 161/1024 [MB] (161 MBps) Copying: 328/1024 [MB] (167 MBps) Copying: 493/1024 [MB] (165 MBps) Copying: 661/1024 [MB] (167 MBps) Copying: 823/1024 [MB] (161 MBps) Copying: 988/1024 [MB] (165 MBps) Copying: 1024/1024 [MB] (average 164 MBps) 00:43:52.795 00:43:52.795 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83987 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:43:52.795 07:52:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:43:52.795 [2024-07-15 07:52:31.281691] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:43:52.795 [2024-07-15 07:52:31.281892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85004 ] 00:43:53.053 [2024-07-15 07:52:31.464013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:53.311 [2024-07-15 07:52:31.742736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.569 [2024-07-15 07:52:32.135489] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:53.569 [2024-07-15 07:52:32.135578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:53.827 [2024-07-15 07:52:32.203570] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:43:53.827 [2024-07-15 07:52:32.204087] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:43:53.827 [2024-07-15 07:52:32.204320] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:43:54.141 [2024-07-15 07:52:32.473696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.473760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:54.141 [2024-07-15 07:52:32.473781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:54.141 [2024-07-15 07:52:32.473794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.473872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.473894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:54.141 [2024-07-15 07:52:32.473907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:43:54.141 [2024-07-15 07:52:32.473925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.473958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:54.141 [2024-07-15 07:52:32.474872] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:54.141 [2024-07-15 07:52:32.474900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.474914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:54.141 [2024-07-15 07:52:32.474928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:43:54.141 [2024-07-15 07:52:32.474940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.477715] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:54.141 [2024-07-15 07:52:32.496326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.496379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:54.141 [2024-07-15 07:52:32.496398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.613 ms 00:43:54.141 [2024-07-15 07:52:32.496418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.496529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.496551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:54.141 [2024-07-15 07:52:32.496565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:43:54.141 [2024-07-15 07:52:32.496577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.509630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.509683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:54.141 [2024-07-15 07:52:32.509709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:43:54.141 [2024-07-15 07:52:32.509722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.509840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.509861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:54.141 [2024-07-15 07:52:32.509875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:43:54.141 [2024-07-15 07:52:32.509886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.509983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.510004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:54.141 [2024-07-15 07:52:32.510017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:43:54.141 [2024-07-15 07:52:32.510033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.510071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:54.141 [2024-07-15 07:52:32.516082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.516120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:54.141 [2024-07-15 07:52:32.516136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:43:54.141 [2024-07-15 07:52:32.516148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 
07:52:32.516193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.516211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:54.141 [2024-07-15 07:52:32.516225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:43:54.141 [2024-07-15 07:52:32.516237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.516285] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:54.141 [2024-07-15 07:52:32.516322] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:54.141 [2024-07-15 07:52:32.516375] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:54.141 [2024-07-15 07:52:32.516399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:43:54.141 [2024-07-15 07:52:32.516524] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:54.141 [2024-07-15 07:52:32.516545] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:54.141 [2024-07-15 07:52:32.516560] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:43:54.141 [2024-07-15 07:52:32.516576] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:54.141 [2024-07-15 07:52:32.516590] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:54.141 [2024-07-15 07:52:32.516604] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:54.141 [2024-07-15 07:52:32.516623] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:54.141 [2024-07-15 07:52:32.516634] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:54.141 [2024-07-15 07:52:32.516645] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:54.141 [2024-07-15 07:52:32.516658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.516670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:54.141 [2024-07-15 07:52:32.516683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:43:54.141 [2024-07-15 07:52:32.516694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.516791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.141 [2024-07-15 07:52:32.516814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:54.141 [2024-07-15 07:52:32.516827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:43:54.141 [2024-07-15 07:52:32.516839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.141 [2024-07-15 07:52:32.516958] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:54.141 [2024-07-15 07:52:32.516976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:54.141 [2024-07-15 07:52:32.516989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:54.141 [2024-07-15 07:52:32.517002] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:43:54.141 [2024-07-15 07:52:32.517016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:54.141 [2024-07-15 07:52:32.517028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:54.141 [2024-07-15 07:52:32.517039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:54.141 [2024-07-15 07:52:32.517051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:54.141 [2024-07-15 07:52:32.517062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:54.141 [2024-07-15 07:52:32.517074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:54.141 [2024-07-15 07:52:32.517086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:54.142 [2024-07-15 07:52:32.517097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:54.142 [2024-07-15 07:52:32.517108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:54.142 [2024-07-15 07:52:32.517119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:54.142 [2024-07-15 07:52:32.517130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:54.142 [2024-07-15 07:52:32.517140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:54.142 [2024-07-15 07:52:32.517178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:54.142 [2024-07-15 07:52:32.517221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:54.142 [2024-07-15 07:52:32.517264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:54.142 [2024-07-15 07:52:32.517296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:54.142 [2024-07-15 07:52:32.517328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:54.142 [2024-07-15 07:52:32.517360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:54.142 [2024-07-15 07:52:32.517381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:54.142 [2024-07-15 07:52:32.517393] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:54.142 [2024-07-15 07:52:32.517403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:54.142 [2024-07-15 07:52:32.517415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:54.142 [2024-07-15 07:52:32.517426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:54.142 [2024-07-15 07:52:32.517465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:54.142 [2024-07-15 07:52:32.517492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:54.142 [2024-07-15 07:52:32.517503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517514] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:54.142 [2024-07-15 07:52:32.517526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:54.142 [2024-07-15 07:52:32.517538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:54.142 [2024-07-15 07:52:32.517562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:54.142 [2024-07-15 07:52:32.517573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:54.142 [2024-07-15 07:52:32.517594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:54.142 [2024-07-15 07:52:32.517605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:54.142 [2024-07-15 07:52:32.517616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:54.142 [2024-07-15 07:52:32.517627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:54.142 [2024-07-15 07:52:32.517640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:54.142 [2024-07-15 07:52:32.517666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:54.142 [2024-07-15 07:52:32.517691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:54.142 [2024-07-15 07:52:32.517703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:54.142 [2024-07-15 07:52:32.517715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:54.142 [2024-07-15 07:52:32.517727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:54.142 [2024-07-15 07:52:32.517738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:54.142 [2024-07-15 07:52:32.517750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:54.142 [2024-07-15 
07:52:32.517762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:54.142 [2024-07-15 07:52:32.517773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:54.142 [2024-07-15 07:52:32.517785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:54.142 [2024-07-15 07:52:32.517853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:54.142 [2024-07-15 07:52:32.517867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:54.142 [2024-07-15 07:52:32.517892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:54.142 [2024-07-15 07:52:32.517904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:54.142 [2024-07-15 07:52:32.517915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:54.142 [2024-07-15 07:52:32.517927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.517940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:54.142 [2024-07-15 07:52:32.517953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:43:54.142 [2024-07-15 07:52:32.517964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.582101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.582188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:54.142 [2024-07-15 07:52:32.582219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.062 ms 00:43:54.142 [2024-07-15 07:52:32.582233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.582385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.582404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:54.142 [2024-07-15 07:52:32.582419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:43:54.142 [2024-07-15 07:52:32.582439] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.632670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.632738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:54.142 [2024-07-15 07:52:32.632760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.092 ms 00:43:54.142 [2024-07-15 07:52:32.632772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.632855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.632879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:54.142 [2024-07-15 07:52:32.632894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:54.142 [2024-07-15 07:52:32.632906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.633824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.633851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:54.142 [2024-07-15 07:52:32.633866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:43:54.142 [2024-07-15 07:52:32.633879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.634075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.634096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:54.142 [2024-07-15 07:52:32.634115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:43:54.142 [2024-07-15 07:52:32.634126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.656059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.656105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:54.142 [2024-07-15 07:52:32.656122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.903 ms 00:43:54.142 [2024-07-15 07:52:32.656134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.142 [2024-07-15 07:52:32.674613] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:43:54.142 [2024-07-15 07:52:32.674679] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:54.142 [2024-07-15 07:52:32.674699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.142 [2024-07-15 07:52:32.674712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:54.142 [2024-07-15 07:52:32.674726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.392 ms 00:43:54.142 [2024-07-15 07:52:32.674737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.400 [2024-07-15 07:52:32.705498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.400 [2024-07-15 07:52:32.705546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:54.400 [2024-07-15 07:52:32.705564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.712 ms 00:43:54.400 [2024-07-15 07:52:32.705576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.400 [2024-07-15 
07:52:32.721398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.400 [2024-07-15 07:52:32.721450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:54.400 [2024-07-15 07:52:32.721482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.757 ms 00:43:54.400 [2024-07-15 07:52:32.721496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.400 [2024-07-15 07:52:32.737125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.400 [2024-07-15 07:52:32.737177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:54.400 [2024-07-15 07:52:32.737194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.583 ms 00:43:54.400 [2024-07-15 07:52:32.737205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.400 [2024-07-15 07:52:32.738119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.400 [2024-07-15 07:52:32.738151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:54.400 [2024-07-15 07:52:32.738171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:43:54.400 [2024-07-15 07:52:32.738183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.400 [2024-07-15 07:52:32.826231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.400 [2024-07-15 07:52:32.826368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:54.400 [2024-07-15 07:52:32.826392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.003 ms 00:43:54.400 [2024-07-15 07:52:32.826405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.839259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:54.401 [2024-07-15 07:52:32.843192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.843228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:54.401 [2024-07-15 07:52:32.843245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.697 ms 00:43:54.401 [2024-07-15 07:52:32.843257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.843374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.843395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:54.401 [2024-07-15 07:52:32.843414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:54.401 [2024-07-15 07:52:32.843427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.843557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.843583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:54.401 [2024-07-15 07:52:32.843597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:43:54.401 [2024-07-15 07:52:32.843609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.843645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.843661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:54.401 [2024-07-15 07:52:32.843674] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:43:54.401 [2024-07-15 07:52:32.843693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.843738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:54.401 [2024-07-15 07:52:32.843756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.843769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:54.401 [2024-07-15 07:52:32.843781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:43:54.401 [2024-07-15 07:52:32.843793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.877429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.877530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:54.401 [2024-07-15 07:52:32.877556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.609 ms 00:43:54.401 [2024-07-15 07:52:32.877574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.877695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:54.401 [2024-07-15 07:52:32.877716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:54.401 [2024-07-15 07:52:32.877729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:43:54.401 [2024-07-15 07:52:32.877740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:54.401 [2024-07-15 07:52:32.879443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.118 ms, result 0 00:44:37.154  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 102/1024 [MB] (25 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 154/1024 [MB] (26 MBps) Copying: 180/1024 [MB] (26 MBps) Copying: 206/1024 [MB] (25 MBps) Copying: 230/1024 [MB] (24 MBps) Copying: 255/1024 [MB] (25 MBps) Copying: 280/1024 [MB] (24 MBps) Copying: 305/1024 [MB] (25 MBps) Copying: 330/1024 [MB] (25 MBps) Copying: 355/1024 [MB] (24 MBps) Copying: 380/1024 [MB] (25 MBps) Copying: 405/1024 [MB] (25 MBps) Copying: 431/1024 [MB] (25 MBps) Copying: 455/1024 [MB] (24 MBps) Copying: 480/1024 [MB] (24 MBps) Copying: 505/1024 [MB] (25 MBps) Copying: 530/1024 [MB] (24 MBps) Copying: 554/1024 [MB] (24 MBps) Copying: 579/1024 [MB] (25 MBps) Copying: 605/1024 [MB] (25 MBps) Copying: 630/1024 [MB] (25 MBps) Copying: 654/1024 [MB] (24 MBps) Copying: 679/1024 [MB] (25 MBps) Copying: 705/1024 [MB] (25 MBps) Copying: 730/1024 [MB] (25 MBps) Copying: 755/1024 [MB] (24 MBps) Copying: 779/1024 [MB] (24 MBps) Copying: 803/1024 [MB] (24 MBps) Copying: 826/1024 [MB] (23 MBps) Copying: 849/1024 [MB] (23 MBps) Copying: 873/1024 [MB] (23 MBps) Copying: 897/1024 [MB] (23 MBps) Copying: 922/1024 [MB] (24 MBps) Copying: 945/1024 [MB] (22 MBps) Copying: 967/1024 [MB] (22 MBps) Copying: 991/1024 [MB] (23 MBps) Copying: 1014/1024 [MB] (23 MBps) Copying: 1048124/1048576 [kB] (8888 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-15 07:53:15.469290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.469386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:37.154 [2024-07-15 07:53:15.469422] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:37.154 [2024-07-15 07:53:15.469438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.470528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:37.154 [2024-07-15 07:53:15.476859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.476917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:37.154 [2024-07-15 07:53:15.476949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:44:37.154 [2024-07-15 07:53:15.476962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.490460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.490522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:37.154 [2024-07-15 07:53:15.490566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.019 ms 00:44:37.154 [2024-07-15 07:53:15.490579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.513682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.513729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:37.154 [2024-07-15 07:53:15.513748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.077 ms 00:44:37.154 [2024-07-15 07:53:15.513761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.520119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.520207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:37.154 [2024-07-15 07:53:15.520239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:44:37.154 [2024-07-15 07:53:15.520260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.551624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.551681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:37.154 [2024-07-15 07:53:15.551715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.311 ms 00:44:37.154 [2024-07-15 07:53:15.551727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.569911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.569970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:37.154 [2024-07-15 07:53:15.570004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.138 ms 00:44:37.154 [2024-07-15 07:53:15.570016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.678197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.678320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:37.154 [2024-07-15 07:53:15.678348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.122 ms 00:44:37.154 [2024-07-15 07:53:15.678362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.709578] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.709635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:44:37.154 [2024-07-15 07:53:15.709667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.192 ms 00:44:37.154 [2024-07-15 07:53:15.709679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.154 [2024-07-15 07:53:15.738779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.154 [2024-07-15 07:53:15.738834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:44:37.154 [2024-07-15 07:53:15.738851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.056 ms 00:44:37.154 [2024-07-15 07:53:15.738864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.413 [2024-07-15 07:53:15.770197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.413 [2024-07-15 07:53:15.770269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:37.413 [2024-07-15 07:53:15.770301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.288 ms 00:44:37.413 [2024-07-15 07:53:15.770329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.413 [2024-07-15 07:53:15.801070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.413 [2024-07-15 07:53:15.801144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:37.413 [2024-07-15 07:53:15.801179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.640 ms 00:44:37.413 [2024-07-15 07:53:15.801190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.413 [2024-07-15 07:53:15.801235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:37.413 [2024-07-15 07:53:15.801259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:44:37.413 [2024-07-15 07:53:15.801276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 
261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:37.413 [2024-07-15 07:53:15.801466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.801991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802053] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 
07:53:15.802368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:37.414 [2024-07-15 07:53:15.802442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:37.415 [2024-07-15 07:53:15.802567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:37.415 [2024-07-15 07:53:15.802579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0e24a3-c59b-4e19-8a54-562f5b275761 00:44:37.415 [2024-07-15 07:53:15.802592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:44:37.415 [2024-07-15 07:53:15.802605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:44:37.415 [2024-07-15 07:53:15.802625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:44:37.415 [2024-07-15 07:53:15.802643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:44:37.415 [2024-07-15 07:53:15.802655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:37.415 [2024-07-15 07:53:15.802668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:37.415 [2024-07-15 07:53:15.802680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:37.415 [2024-07-15 07:53:15.802691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:37.415 [2024-07-15 07:53:15.802702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:37.415 [2024-07-15 07:53:15.802714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.415 [2024-07-15 07:53:15.802729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:37.415 [2024-07-15 07:53:15.802755] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:44:37.415 [2024-07-15 07:53:15.802768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.820422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.415 [2024-07-15 07:53:15.820486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:37.415 [2024-07-15 07:53:15.820529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.598 ms 00:44:37.415 [2024-07-15 07:53:15.820541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.821062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:37.415 [2024-07-15 07:53:15.821089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:37.415 [2024-07-15 07:53:15.821104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:44:37.415 [2024-07-15 07:53:15.821116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.861087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.415 [2024-07-15 07:53:15.861160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:37.415 [2024-07-15 07:53:15.861193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.415 [2024-07-15 07:53:15.861205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.861285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.415 [2024-07-15 07:53:15.861301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:37.415 [2024-07-15 07:53:15.861313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.415 [2024-07-15 07:53:15.861324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.861426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.415 [2024-07-15 07:53:15.861470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:37.415 [2024-07-15 07:53:15.861484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.415 [2024-07-15 07:53:15.861513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.861543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.415 [2024-07-15 07:53:15.861558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:37.415 [2024-07-15 07:53:15.861570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.415 [2024-07-15 07:53:15.861581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.415 [2024-07-15 07:53:15.965141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.415 [2024-07-15 07:53:15.965245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:37.415 [2024-07-15 07:53:15.965282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.415 [2024-07-15 07:53:15.965294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.673 [2024-07-15 07:53:16.054547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.673 [2024-07-15 07:53:16.054616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 
00:44:37.673 [2024-07-15 07:53:16.054653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.673 [2024-07-15 07:53:16.054666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.673 [2024-07-15 07:53:16.054753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.673 [2024-07-15 07:53:16.054776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:37.673 [2024-07-15 07:53:16.054798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.673 [2024-07-15 07:53:16.054830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.673 [2024-07-15 07:53:16.054895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.673 [2024-07-15 07:53:16.054912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:37.673 [2024-07-15 07:53:16.054924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.674 [2024-07-15 07:53:16.054936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.674 [2024-07-15 07:53:16.055270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.674 [2024-07-15 07:53:16.055306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:37.674 [2024-07-15 07:53:16.055319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.674 [2024-07-15 07:53:16.055338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.674 [2024-07-15 07:53:16.055392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.674 [2024-07-15 07:53:16.055411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:37.674 [2024-07-15 07:53:16.055423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.674 [2024-07-15 07:53:16.055434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.674 [2024-07-15 07:53:16.055544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.674 [2024-07-15 07:53:16.055564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:37.674 [2024-07-15 07:53:16.055577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.674 [2024-07-15 07:53:16.055596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.674 [2024-07-15 07:53:16.055674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:37.674 [2024-07-15 07:53:16.055691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:37.674 [2024-07-15 07:53:16.055705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:37.674 [2024-07-15 07:53:16.055718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:37.674 [2024-07-15 07:53:16.055888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 589.575 ms, result 0 00:44:39.575 00:44:39.575 00:44:39.575 07:53:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:44:41.530 07:53:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:41.530 
[2024-07-15 07:53:19.943985] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:44:41.530 [2024-07-15 07:53:19.944148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85477 ] 00:44:41.530 [2024-07-15 07:53:20.115657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:42.097 [2024-07-15 07:53:20.416436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.355 [2024-07-15 07:53:20.808526] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:42.355 [2024-07-15 07:53:20.808655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:42.615 [2024-07-15 07:53:20.977127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:20.977196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:42.615 [2024-07-15 07:53:20.977218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:42.615 [2024-07-15 07:53:20.977230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.615 [2024-07-15 07:53:20.977300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:20.977321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:42.615 [2024-07-15 07:53:20.977334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:44:42.615 [2024-07-15 07:53:20.977350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.615 [2024-07-15 07:53:20.977379] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:42.615 [2024-07-15 07:53:20.978349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:42.615 [2024-07-15 07:53:20.978382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:20.978401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:42.615 [2024-07-15 07:53:20.978414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:44:42.615 [2024-07-15 07:53:20.978425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.615 [2024-07-15 07:53:20.981062] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:42.615 [2024-07-15 07:53:20.998082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:20.998125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:42.615 [2024-07-15 07:53:20.998161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.022 ms 00:44:42.615 [2024-07-15 07:53:20.998173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.615 [2024-07-15 07:53:20.998263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:20.998283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:42.615 [2024-07-15 07:53:20.998300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:44:42.615 [2024-07-15 07:53:20.998311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:44:42.615 [2024-07-15 07:53:21.011736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:21.011785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:42.615 [2024-07-15 07:53:21.011818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.336 ms 00:44:42.615 [2024-07-15 07:53:21.011835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.615 [2024-07-15 07:53:21.011962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.615 [2024-07-15 07:53:21.011986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:42.615 [2024-07-15 07:53:21.011999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:44:42.616 [2024-07-15 07:53:21.012011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.012098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.616 [2024-07-15 07:53:21.012118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:42.616 [2024-07-15 07:53:21.012132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:44:42.616 [2024-07-15 07:53:21.012143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.012184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:42.616 [2024-07-15 07:53:21.018062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.616 [2024-07-15 07:53:21.018263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:42.616 [2024-07-15 07:53:21.018413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.890 ms 00:44:42.616 [2024-07-15 07:53:21.018569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.018670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.616 [2024-07-15 07:53:21.018861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:42.616 [2024-07-15 07:53:21.018988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:42.616 [2024-07-15 07:53:21.019013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.019072] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:42.616 [2024-07-15 07:53:21.019117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:42.616 [2024-07-15 07:53:21.019164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:42.616 [2024-07-15 07:53:21.019191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:44:42.616 [2024-07-15 07:53:21.019301] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:42.616 [2024-07-15 07:53:21.019334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:42.616 [2024-07-15 07:53:21.019350] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:44:42.616 [2024-07-15 07:53:21.019367] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:42.616 [2024-07-15 07:53:21.019381] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:42.616 [2024-07-15 07:53:21.019395] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:42.616 [2024-07-15 07:53:21.019407] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:42.616 [2024-07-15 07:53:21.019418] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:42.616 [2024-07-15 07:53:21.019430] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:42.616 [2024-07-15 07:53:21.019443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.616 [2024-07-15 07:53:21.019459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:42.616 [2024-07-15 07:53:21.019506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:44:42.616 [2024-07-15 07:53:21.019519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.019631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.616 [2024-07-15 07:53:21.019648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:42.616 [2024-07-15 07:53:21.019661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:44:42.616 [2024-07-15 07:53:21.019672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.616 [2024-07-15 07:53:21.019780] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:42.616 [2024-07-15 07:53:21.019798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:42.616 [2024-07-15 07:53:21.019818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:42.616 [2024-07-15 07:53:21.019846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.019858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:42.616 [2024-07-15 07:53:21.019869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.019880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:42.616 [2024-07-15 07:53:21.019891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:42.616 [2024-07-15 07:53:21.019902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:42.616 [2024-07-15 07:53:21.019913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:42.616 [2024-07-15 07:53:21.019924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:42.616 [2024-07-15 07:53:21.019934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:42.616 [2024-07-15 07:53:21.019944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:42.616 [2024-07-15 07:53:21.019956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:42.616 [2024-07-15 07:53:21.019967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:42.616 [2024-07-15 07:53:21.019977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.019989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:42.616 [2024-07-15 07:53:21.019999] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:42.616 [2024-07-15 07:53:21.020045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:42.616 [2024-07-15 07:53:21.020077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:42.616 [2024-07-15 07:53:21.020107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:42.616 [2024-07-15 07:53:21.020141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:42.616 [2024-07-15 07:53:21.020174] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:42.616 [2024-07-15 07:53:21.020195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:42.616 [2024-07-15 07:53:21.020205] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:42.616 [2024-07-15 07:53:21.020215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:42.616 [2024-07-15 07:53:21.020226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:42.616 [2024-07-15 07:53:21.020237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:42.616 [2024-07-15 07:53:21.020247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:42.616 [2024-07-15 07:53:21.020269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:42.616 [2024-07-15 07:53:21.020280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020290] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:42.616 [2024-07-15 07:53:21.020302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:42.616 [2024-07-15 07:53:21.020313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:42.616 [2024-07-15 07:53:21.020337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:42.616 [2024-07-15 07:53:21.020348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:42.616 [2024-07-15 
07:53:21.020361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:42.616 [2024-07-15 07:53:21.020372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:42.616 [2024-07-15 07:53:21.020382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:42.616 [2024-07-15 07:53:21.020394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:42.616 [2024-07-15 07:53:21.020406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:42.616 [2024-07-15 07:53:21.020421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.020434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:42.616 [2024-07-15 07:53:21.020447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:42.616 [2024-07-15 07:53:21.020458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:42.616 [2024-07-15 07:53:21.020487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:42.616 [2024-07-15 07:53:21.020501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:42.616 [2024-07-15 07:53:21.020833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:42.616 [2024-07-15 07:53:21.020915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:42.616 [2024-07-15 07:53:21.020975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:42.616 [2024-07-15 07:53:21.021192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:42.616 [2024-07-15 07:53:21.021329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.021486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.021550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.021607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.021752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:42.616 [2024-07-15 07:53:21.021818] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:42.616 [2024-07-15 07:53:21.021941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:42.616 [2024-07-15 07:53:21.022069] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:42.617 [2024-07-15 07:53:21.022202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:42.617 [2024-07-15 07:53:21.022274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:42.617 [2024-07-15 07:53:21.022424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:42.617 [2024-07-15 07:53:21.022449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.022494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:42.617 [2024-07-15 07:53:21.022509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:44:42.617 [2024-07-15 07:53:21.022521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.081221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.081303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:42.617 [2024-07-15 07:53:21.081342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.597 ms 00:44:42.617 [2024-07-15 07:53:21.081355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.081534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.081555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:42.617 [2024-07-15 07:53:21.081569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:44:42.617 [2024-07-15 07:53:21.081581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.127228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.127284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:42.617 [2024-07-15 07:53:21.127320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.523 ms 00:44:42.617 [2024-07-15 07:53:21.127346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.127417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.127434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:42.617 [2024-07-15 07:53:21.127446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:42.617 [2024-07-15 07:53:21.127457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.128454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.128517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:42.617 [2024-07-15 07:53:21.128566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:44:42.617 [2024-07-15 07:53:21.128592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.128783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.128804] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:42.617 [2024-07-15 07:53:21.128817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:44:42.617 [2024-07-15 07:53:21.128829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.148988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.149029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:42.617 [2024-07-15 07:53:21.149064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.131 ms 00:44:42.617 [2024-07-15 07:53:21.149075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.165665] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:44:42.617 [2024-07-15 07:53:21.165711] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:42.617 [2024-07-15 07:53:21.165730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.165742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:42.617 [2024-07-15 07:53:21.165755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.521 ms 00:44:42.617 [2024-07-15 07:53:21.165766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.196125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.196202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:42.617 [2024-07-15 07:53:21.196222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.306 ms 00:44:42.617 [2024-07-15 07:53:21.196242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.617 [2024-07-15 07:53:21.212520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.617 [2024-07-15 07:53:21.212584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:42.617 [2024-07-15 07:53:21.212604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.225 ms 00:44:42.617 [2024-07-15 07:53:21.212618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.228549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.228590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:42.876 [2024-07-15 07:53:21.228606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.868 ms 00:44:42.876 [2024-07-15 07:53:21.228617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.229658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.229692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:42.876 [2024-07-15 07:53:21.229708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:44:42.876 [2024-07-15 07:53:21.229720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.312015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.312104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:44:42.876 [2024-07-15 07:53:21.312127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.263 ms 00:44:42.876 [2024-07-15 07:53:21.312140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.323995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:42.876 [2024-07-15 07:53:21.327959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.327991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:42.876 [2024-07-15 07:53:21.328007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.730 ms 00:44:42.876 [2024-07-15 07:53:21.328018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.328160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.328181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:42.876 [2024-07-15 07:53:21.328196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:42.876 [2024-07-15 07:53:21.328208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.331661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.331697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:42.876 [2024-07-15 07:53:21.331711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:44:42.876 [2024-07-15 07:53:21.331722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.331761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.331778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:42.876 [2024-07-15 07:53:21.331791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:42.876 [2024-07-15 07:53:21.331801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.331875] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:42.876 [2024-07-15 07:53:21.331909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.331921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:42.876 [2024-07-15 07:53:21.331938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:44:42.876 [2024-07-15 07:53:21.331950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.362269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.362305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:42.876 [2024-07-15 07:53:21.362322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.289 ms 00:44:42.876 [2024-07-15 07:53:21.362333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.362415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:42.876 [2024-07-15 07:53:21.362443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:42.876 [2024-07-15 07:53:21.362484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 
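(Every management step in the trace above is logged as an Action with a name, duration and status. To see where the startup time goes, an ad-hoc one-liner along these lines can pair each "name:" with the following "duration:"; this is an illustrative helper, not an SPDK tool, and it assumes the console log with one *NOTICE* entry per line, saved here as console.log:

  awk '/name:/ {sub(/.*name: /,""); step=$0}
       /duration:/ {sub(/.*duration: /,""); sub(/ ms.*/,""); printf "%10.3f ms  %s\n", $0, step}' console.log \
    | sort -rn | head

Run over this startup it would put Restore P2L checkpoints (82.263 ms) and Initialize metadata (58.597 ms) at the top, consistent with the 'FTL startup' total reported just below.)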
00:44:42.876 [2024-07-15 07:53:21.362530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:42.876 [2024-07-15 07:53:21.370247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.772 ms, result 0 00:45:22.351  Copying: 884/1048576 [kB] (884 kBps) Copying: 4140/1048576 [kB] (3256 kBps) Copying: 21/1024 [MB] (17 MBps) Copying: 49/1024 [MB] (27 MBps) Copying: 77/1024 [MB] (28 MBps) Copying: 104/1024 [MB] (27 MBps) Copying: 132/1024 [MB] (27 MBps) Copying: 159/1024 [MB] (26 MBps) Copying: 186/1024 [MB] (27 MBps) Copying: 214/1024 [MB] (27 MBps) Copying: 241/1024 [MB] (27 MBps) Copying: 269/1024 [MB] (27 MBps) Copying: 297/1024 [MB] (28 MBps) Copying: 326/1024 [MB] (28 MBps) Copying: 355/1024 [MB] (28 MBps) Copying: 385/1024 [MB] (29 MBps) Copying: 413/1024 [MB] (28 MBps) Copying: 442/1024 [MB] (28 MBps) Copying: 469/1024 [MB] (27 MBps) Copying: 498/1024 [MB] (28 MBps) Copying: 527/1024 [MB] (29 MBps) Copying: 554/1024 [MB] (27 MBps) Copying: 582/1024 [MB] (27 MBps) Copying: 611/1024 [MB] (29 MBps) Copying: 639/1024 [MB] (28 MBps) Copying: 667/1024 [MB] (27 MBps) Copying: 695/1024 [MB] (27 MBps) Copying: 722/1024 [MB] (27 MBps) Copying: 750/1024 [MB] (27 MBps) Copying: 778/1024 [MB] (28 MBps) Copying: 805/1024 [MB] (26 MBps) Copying: 832/1024 [MB] (27 MBps) Copying: 859/1024 [MB] (26 MBps) Copying: 887/1024 [MB] (28 MBps) Copying: 915/1024 [MB] (28 MBps) Copying: 944/1024 [MB] (28 MBps) Copying: 973/1024 [MB] (28 MBps) Copying: 1000/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-15 07:54:00.839605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.839738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:22.351 [2024-07-15 07:54:00.839782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:22.351 [2024-07-15 07:54:00.839806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.839864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:22.351 [2024-07-15 07:54:00.845641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.845701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:22.351 [2024-07-15 07:54:00.845726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.742 ms 00:45:22.351 [2024-07-15 07:54:00.845742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.846101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.846131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:22.351 [2024-07-15 07:54:00.846151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:45:22.351 [2024-07-15 07:54:00.846167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.860293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.860417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:22.351 [2024-07-15 07:54:00.860445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.079 ms 00:45:22.351 [2024-07-15 07:54:00.860474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 
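(The copy progress above ends at an average of 26 MBps, which lines up with the timestamps: FTL startup finished at 07:53:21.370 and the last progress update is stamped 07:54:00.839, roughly 39.5 s for 1024 MB. A quick check, plain arithmetic and nothing SPDK-specific:

  echo 'scale=1; 1024 / 39.5' | bc   # ~25.9 MB/s, matching the reported 26 MBps average

The per-interval rates visible above settle around 27-29 MBps once the copy gets going.)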
[2024-07-15 07:54:00.869165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.869285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:22.351 [2024-07-15 07:54:00.869319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.635 ms 00:45:22.351 [2024-07-15 07:54:00.869341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.902096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.902186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:22.351 [2024-07-15 07:54:00.902223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.542 ms 00:45:22.351 [2024-07-15 07:54:00.902235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.919476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.919585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:22.351 [2024-07-15 07:54:00.919622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.179 ms 00:45:22.351 [2024-07-15 07:54:00.919634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.923541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.923604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:22.351 [2024-07-15 07:54:00.923622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:45:22.351 [2024-07-15 07:54:00.923636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.351 [2024-07-15 07:54:00.954991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.351 [2024-07-15 07:54:00.955069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:45:22.351 [2024-07-15 07:54:00.955091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.328 ms 00:45:22.351 [2024-07-15 07:54:00.955104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.611 [2024-07-15 07:54:00.985385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.611 [2024-07-15 07:54:00.985485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:45:22.611 [2024-07-15 07:54:00.985525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.222 ms 00:45:22.611 [2024-07-15 07:54:00.985537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.611 [2024-07-15 07:54:01.015255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.611 [2024-07-15 07:54:01.015361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:22.611 [2024-07-15 07:54:01.015400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.655 ms 00:45:22.611 [2024-07-15 07:54:01.015433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.611 [2024-07-15 07:54:01.045997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.611 [2024-07-15 07:54:01.046092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:22.611 [2024-07-15 07:54:01.046128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.421 ms 00:45:22.611 [2024-07-15 07:54:01.046140] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.611 [2024-07-15 07:54:01.046237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:22.611 [2024-07-15 07:54:01.046270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:45:22.611 [2024-07-15 07:54:01.046286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:45:22.611 [2024-07-15 07:54:01.046299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046607] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.046991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:22.611 [2024-07-15 07:54:01.047332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 
07:54:01.047344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:45:22.612 [2024-07-15 07:54:01.047685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:22.612 [2024-07-15 07:54:01.047735] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:22.612 [2024-07-15 07:54:01.047748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0e24a3-c59b-4e19-8a54-562f5b275761 00:45:22.612 [2024-07-15 07:54:01.047761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:45:22.612 [2024-07-15 07:54:01.047772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137664 00:45:22.612 [2024-07-15 07:54:01.047783] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135680 00:45:22.612 [2024-07-15 07:54:01.047811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0146 00:45:22.612 [2024-07-15 07:54:01.047830] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:22.612 [2024-07-15 07:54:01.047847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:22.612 [2024-07-15 07:54:01.047859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:22.612 [2024-07-15 07:54:01.047869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:22.612 [2024-07-15 07:54:01.047879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:22.612 [2024-07-15 07:54:01.047891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.612 [2024-07-15 07:54:01.047903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:22.612 [2024-07-15 07:54:01.047916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:45:22.612 [2024-07-15 07:54:01.047927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.064587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.612 [2024-07-15 07:54:01.064670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:22.612 [2024-07-15 07:54:01.064706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.601 ms 00:45:22.612 [2024-07-15 07:54:01.064757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.065341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.612 [2024-07-15 07:54:01.065373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:22.612 [2024-07-15 07:54:01.065389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:45:22.612 [2024-07-15 07:54:01.065402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.107007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.612 [2024-07-15 07:54:01.107097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:22.612 [2024-07-15 07:54:01.107126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.612 [2024-07-15 07:54:01.107140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.107250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.612 [2024-07-15 07:54:01.107268] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:22.612 [2024-07-15 07:54:01.107281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.612 [2024-07-15 07:54:01.107293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.107413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.612 [2024-07-15 07:54:01.107436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:22.612 [2024-07-15 07:54:01.107465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.612 [2024-07-15 07:54:01.107487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.107514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.612 [2024-07-15 07:54:01.107529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:22.612 [2024-07-15 07:54:01.107542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.612 [2024-07-15 07:54:01.107554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.612 [2024-07-15 07:54:01.218141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.612 [2024-07-15 07:54:01.218257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:22.612 [2024-07-15 07:54:01.218303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.612 [2024-07-15 07:54:01.218316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.303289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.303425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:22.871 [2024-07-15 07:54:01.303463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.303490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.303584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.303617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:22.871 [2024-07-15 07:54:01.303647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.303659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.303718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.303734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:22.871 [2024-07-15 07:54:01.303763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.303776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.303912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.303932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:22.871 [2024-07-15 07:54:01.303945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.303957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.304071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:45:22.871 [2024-07-15 07:54:01.304093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:22.871 [2024-07-15 07:54:01.304107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.304118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.304188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.304225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:22.871 [2024-07-15 07:54:01.304253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.304264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.304332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:22.871 [2024-07-15 07:54:01.304364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:22.871 [2024-07-15 07:54:01.304378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:22.871 [2024-07-15 07:54:01.304390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.871 [2024-07-15 07:54:01.304633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 464.975 ms, result 0 00:45:24.246 00:45:24.246 00:45:24.246 07:54:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:45:26.141 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:45:26.141 07:54:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:26.399 [2024-07-15 07:54:04.840348] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
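(As a cross-check on the statistics dumped during the shutdown a few lines above, for device UUID 1f0e24a3-c59b-4e19-8a54-562f5b275761: write amplification is total writes over user writes, and the two counters printed there reproduce the reported WAF exactly:

  echo 'scale=4; 137664 / 135680' | bc   # 1.0146, matching the 'WAF: 1.0146' line

i.e. the FTL added roughly 1.5% of internal writes on top of the user data during this run.)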
00:45:26.399 [2024-07-15 07:54:04.840604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85911 ] 00:45:26.658 [2024-07-15 07:54:05.013555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.916 [2024-07-15 07:54:05.313727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:27.174 [2024-07-15 07:54:05.709552] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:27.174 [2024-07-15 07:54:05.709655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:27.435 [2024-07-15 07:54:05.876972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.877048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:27.435 [2024-07-15 07:54:05.877084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:27.435 [2024-07-15 07:54:05.877096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.877164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.877182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:27.435 [2024-07-15 07:54:05.877195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:45:27.435 [2024-07-15 07:54:05.877242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.877274] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:27.435 [2024-07-15 07:54:05.878198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:27.435 [2024-07-15 07:54:05.878234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.878252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:27.435 [2024-07-15 07:54:05.878265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:45:27.435 [2024-07-15 07:54:05.878277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.881001] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:27.435 [2024-07-15 07:54:05.898904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.898952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:27.435 [2024-07-15 07:54:05.898970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.904 ms 00:45:27.435 [2024-07-15 07:54:05.898983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.899172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.899195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:27.435 [2024-07-15 07:54:05.899221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:45:27.435 [2024-07-15 07:54:05.899234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.911387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:27.435 [2024-07-15 07:54:05.911479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:27.435 [2024-07-15 07:54:05.911499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.052 ms 00:45:27.435 [2024-07-15 07:54:05.911512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.911644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.911668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:27.435 [2024-07-15 07:54:05.911682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:45:27.435 [2024-07-15 07:54:05.911693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.911800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.911818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:27.435 [2024-07-15 07:54:05.911832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:45:27.435 [2024-07-15 07:54:05.911843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.911895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:27.435 [2024-07-15 07:54:05.917840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.917893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:27.435 [2024-07-15 07:54:05.917924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:45:27.435 [2024-07-15 07:54:05.917936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.917986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.918002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:27.435 [2024-07-15 07:54:05.918015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:45:27.435 [2024-07-15 07:54:05.918027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.918072] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:27.435 [2024-07-15 07:54:05.918108] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:27.435 [2024-07-15 07:54:05.918154] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:27.435 [2024-07-15 07:54:05.918180] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:45:27.435 [2024-07-15 07:54:05.918287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:27.435 [2024-07-15 07:54:05.918303] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:27.435 [2024-07-15 07:54:05.918319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:45:27.435 [2024-07-15 07:54:05.918335] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:27.435 [2024-07-15 07:54:05.918350] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:27.435 [2024-07-15 07:54:05.918374] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:27.435 [2024-07-15 07:54:05.918387] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:27.435 [2024-07-15 07:54:05.918398] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:27.435 [2024-07-15 07:54:05.918410] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:27.435 [2024-07-15 07:54:05.918423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.918440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:27.435 [2024-07-15 07:54:05.918452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:45:27.435 [2024-07-15 07:54:05.918463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.918569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.435 [2024-07-15 07:54:05.918583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:27.435 [2024-07-15 07:54:05.918595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:45:27.435 [2024-07-15 07:54:05.918607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.435 [2024-07-15 07:54:05.918721] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:27.435 [2024-07-15 07:54:05.918738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:27.435 [2024-07-15 07:54:05.918757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:27.435 [2024-07-15 07:54:05.918768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.918781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:27.436 [2024-07-15 07:54:05.918791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.918803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:27.436 [2024-07-15 07:54:05.918813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:27.436 [2024-07-15 07:54:05.918849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:27.436 [2024-07-15 07:54:05.918861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:27.436 [2024-07-15 07:54:05.918871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:27.436 [2024-07-15 07:54:05.918882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:27.436 [2024-07-15 07:54:05.918904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:27.436 [2024-07-15 07:54:05.918914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:27.436 [2024-07-15 07:54:05.918925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:27.436 [2024-07-15 07:54:05.918938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.918950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:27.436 [2024-07-15 07:54:05.918961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:27.436 [2024-07-15 07:54:05.918973] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.918984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:27.436 [2024-07-15 07:54:05.919008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:27.436 [2024-07-15 07:54:05.919042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:27.436 [2024-07-15 07:54:05.919075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:27.436 [2024-07-15 07:54:05.919109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:27.436 [2024-07-15 07:54:05.919142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:27.436 [2024-07-15 07:54:05.919165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:27.436 [2024-07-15 07:54:05.919176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:27.436 [2024-07-15 07:54:05.919187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:27.436 [2024-07-15 07:54:05.919198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:27.436 [2024-07-15 07:54:05.919210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:27.436 [2024-07-15 07:54:05.919221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:27.436 [2024-07-15 07:54:05.919243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:27.436 [2024-07-15 07:54:05.919254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919264] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:27.436 [2024-07-15 07:54:05.919277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:27.436 [2024-07-15 07:54:05.919289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:27.436 [2024-07-15 07:54:05.919316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:27.436 [2024-07-15 07:54:05.919327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:27.436 [2024-07-15 07:54:05.919339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:27.436 
[2024-07-15 07:54:05.919351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:27.436 [2024-07-15 07:54:05.919361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:27.436 [2024-07-15 07:54:05.919373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:27.436 [2024-07-15 07:54:05.919386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:27.436 [2024-07-15 07:54:05.919401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:27.436 [2024-07-15 07:54:05.919430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:27.436 [2024-07-15 07:54:05.919442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:27.436 [2024-07-15 07:54:05.919468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:27.436 [2024-07-15 07:54:05.919483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:27.436 [2024-07-15 07:54:05.919495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:27.436 [2024-07-15 07:54:05.919507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:27.436 [2024-07-15 07:54:05.919519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:27.436 [2024-07-15 07:54:05.919531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:27.436 [2024-07-15 07:54:05.919543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:27.436 [2024-07-15 07:54:05.919602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:27.436 [2024-07-15 07:54:05.919617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:27.436 [2024-07-15 07:54:05.919644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:27.436 [2024-07-15 07:54:05.919657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:27.436 [2024-07-15 07:54:05.919669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:27.436 [2024-07-15 07:54:05.919682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:05.919700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:27.436 [2024-07-15 07:54:05.919713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:45:27.436 [2024-07-15 07:54:05.919725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:05.979474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:05.980579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:27.436 [2024-07-15 07:54:05.980609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.660 ms 00:45:27.436 [2024-07-15 07:54:05.980623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:05.980767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:05.980785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:27.436 [2024-07-15 07:54:05.980799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:45:27.436 [2024-07-15 07:54:05.980811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:06.027856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:06.027935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:27.436 [2024-07-15 07:54:06.027972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.930 ms 00:45:27.436 [2024-07-15 07:54:06.027985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:06.028060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:06.028076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:27.436 [2024-07-15 07:54:06.028089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:27.436 [2024-07-15 07:54:06.028101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:06.029014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:06.029062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:27.436 [2024-07-15 07:54:06.029093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:45:27.436 [2024-07-15 07:54:06.029104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.436 [2024-07-15 07:54:06.029305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.436 [2024-07-15 07:54:06.029323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:27.436 [2024-07-15 07:54:06.029336] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:45:27.436 [2024-07-15 07:54:06.029348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.050740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.050782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:27.696 [2024-07-15 07:54:06.050799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.364 ms 00:45:27.696 [2024-07-15 07:54:06.050812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.069211] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:27.696 [2024-07-15 07:54:06.069271] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:27.696 [2024-07-15 07:54:06.069305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.069317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:27.696 [2024-07-15 07:54:06.069331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.346 ms 00:45:27.696 [2024-07-15 07:54:06.069342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.097744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.097798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:27.696 [2024-07-15 07:54:06.097847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.348 ms 00:45:27.696 [2024-07-15 07:54:06.097866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.112376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.112430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:27.696 [2024-07-15 07:54:06.112461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.464 ms 00:45:27.696 [2024-07-15 07:54:06.112501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.126613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.126666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:27.696 [2024-07-15 07:54:06.126698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:45:27.696 [2024-07-15 07:54:06.126709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.127623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.127686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:27.696 [2024-07-15 07:54:06.127718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:45:27.696 [2024-07-15 07:54:06.127729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.214030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.214135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:27.696 [2024-07-15 07:54:06.214174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.266 ms 00:45:27.696 [2024-07-15 07:54:06.214204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.225883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:27.696 [2024-07-15 07:54:06.229582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.229630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:27.696 [2024-07-15 07:54:06.229678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.279 ms 00:45:27.696 [2024-07-15 07:54:06.229690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.229793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.229813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:27.696 [2024-07-15 07:54:06.229827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:27.696 [2024-07-15 07:54:06.229838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.231408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.231486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:27.696 [2024-07-15 07:54:06.231502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:45:27.696 [2024-07-15 07:54:06.231514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.231553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.231569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:27.696 [2024-07-15 07:54:06.231582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:45:27.696 [2024-07-15 07:54:06.231593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.231636] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:27.696 [2024-07-15 07:54:06.231654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.231666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:27.696 [2024-07-15 07:54:06.231683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:45:27.696 [2024-07-15 07:54:06.231694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.263023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.263067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:27.696 [2024-07-15 07:54:06.263085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.288 ms 00:45:27.696 [2024-07-15 07:54:06.263098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:27.696 [2024-07-15 07:54:06.263183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:27.696 [2024-07-15 07:54:06.263229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:27.696 [2024-07-15 07:54:06.263242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:45:27.696 [2024-07-15 07:54:06.263253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:45:27.696 [2024-07-15 07:54:06.265015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 387.398 ms, result 0 00:46:08.930  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 105/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (26 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 183/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (25 MBps) Copying: 236/1024 [MB] (26 MBps) Copying: 260/1024 [MB] (24 MBps) Copying: 285/1024 [MB] (24 MBps) Copying: 310/1024 [MB] (24 MBps) Copying: 335/1024 [MB] (24 MBps) Copying: 359/1024 [MB] (24 MBps) Copying: 384/1024 [MB] (24 MBps) Copying: 408/1024 [MB] (24 MBps) Copying: 433/1024 [MB] (25 MBps) Copying: 459/1024 [MB] (25 MBps) Copying: 483/1024 [MB] (24 MBps) Copying: 508/1024 [MB] (24 MBps) Copying: 532/1024 [MB] (23 MBps) Copying: 557/1024 [MB] (25 MBps) Copying: 583/1024 [MB] (25 MBps) Copying: 607/1024 [MB] (24 MBps) Copying: 631/1024 [MB] (24 MBps) Copying: 656/1024 [MB] (24 MBps) Copying: 680/1024 [MB] (24 MBps) Copying: 705/1024 [MB] (24 MBps) Copying: 729/1024 [MB] (24 MBps) Copying: 754/1024 [MB] (25 MBps) Copying: 779/1024 [MB] (24 MBps) Copying: 804/1024 [MB] (24 MBps) Copying: 829/1024 [MB] (25 MBps) Copying: 855/1024 [MB] (25 MBps) Copying: 880/1024 [MB] (24 MBps) Copying: 904/1024 [MB] (24 MBps) Copying: 929/1024 [MB] (25 MBps) Copying: 954/1024 [MB] (25 MBps) Copying: 979/1024 [MB] (24 MBps) Copying: 1003/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-15 07:54:47.306178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.306289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:08.930 [2024-07-15 07:54:47.306315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:46:08.930 [2024-07-15 07:54:47.306330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.306366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:08.930 [2024-07-15 07:54:47.311427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.311496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:08.930 [2024-07-15 07:54:47.311516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.036 ms 00:46:08.930 [2024-07-15 07:54:47.311529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.312355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.312392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:08.930 [2024-07-15 07:54:47.312409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:46:08.930 [2024-07-15 07:54:47.312423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.315982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.316026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:08.930 [2024-07-15 07:54:47.316057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.537 ms 00:46:08.930 [2024-07-15 07:54:47.316068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.322327] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.322372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:08.930 [2024-07-15 07:54:47.322409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.238 ms 00:46:08.930 [2024-07-15 07:54:47.322420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.353577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.353677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:08.930 [2024-07-15 07:54:47.353714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.047 ms 00:46:08.930 [2024-07-15 07:54:47.353742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.371114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.371191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:08.930 [2024-07-15 07:54:47.371236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.307 ms 00:46:08.930 [2024-07-15 07:54:47.371248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.375575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.375635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:08.930 [2024-07-15 07:54:47.375684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.295 ms 00:46:08.930 [2024-07-15 07:54:47.375706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.404559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.404631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:46:08.930 [2024-07-15 07:54:47.404666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.829 ms 00:46:08.930 [2024-07-15 07:54:47.404678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.436310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.436384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:46:08.930 [2024-07-15 07:54:47.436421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.586 ms 00:46:08.930 [2024-07-15 07:54:47.436434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.468134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.468241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:08.930 [2024-07-15 07:54:47.468306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.586 ms 00:46:08.930 [2024-07-15 07:54:47.468319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.930 [2024-07-15 07:54:47.499931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.930 [2024-07-15 07:54:47.499996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:08.931 [2024-07-15 07:54:47.500033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.362 ms 00:46:08.931 [2024-07-15 07:54:47.500045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:46:08.931 [2024-07-15 07:54:47.500106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:08.931 [2024-07-15 07:54:47.500132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:46:08.931 [2024-07-15 07:54:47.500149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:46:08.931 [2024-07-15 07:54:47.500161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.500997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501128] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:08.931 [2024-07-15 07:54:47.501383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501448] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:08.932 [2024-07-15 07:54:47.501506] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:08.932 [2024-07-15 07:54:47.501519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0e24a3-c59b-4e19-8a54-562f5b275761 00:46:08.932 [2024-07-15 07:54:47.501533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:46:08.932 [2024-07-15 07:54:47.501546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:08.932 [2024-07-15 07:54:47.501566] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:08.932 [2024-07-15 07:54:47.501579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:08.932 [2024-07-15 07:54:47.501591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:08.932 [2024-07-15 07:54:47.501604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:08.932 [2024-07-15 07:54:47.501617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:08.932 [2024-07-15 07:54:47.501628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:08.932 [2024-07-15 07:54:47.501639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:08.932 [2024-07-15 07:54:47.501651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.932 [2024-07-15 07:54:47.501664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:08.932 [2024-07-15 07:54:47.501677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:46:08.932 [2024-07-15 07:54:47.501689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.932 [2024-07-15 07:54:47.520000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.932 [2024-07-15 07:54:47.520125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:08.932 [2024-07-15 07:54:47.520178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.259 ms 00:46:08.932 [2024-07-15 07:54:47.520191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:08.932 [2024-07-15 07:54:47.520805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:08.932 [2024-07-15 07:54:47.520836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:08.932 [2024-07-15 07:54:47.520852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:46:08.932 [2024-07-15 07:54:47.520865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.190 [2024-07-15 07:54:47.563671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.190 [2024-07-15 07:54:47.563767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:09.190 [2024-07-15 07:54:47.563804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.190 [2024-07-15 07:54:47.563817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.190 [2024-07-15 07:54:47.563940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.190 [2024-07-15 07:54:47.563958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:09.190 
[2024-07-15 07:54:47.563971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.190 [2024-07-15 07:54:47.563982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.564126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.564147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:09.191 [2024-07-15 07:54:47.564161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.564173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.564198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.564212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:09.191 [2024-07-15 07:54:47.564225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.564236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.680229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.680334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:09.191 [2024-07-15 07:54:47.680371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.680384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.774589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.774693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:09.191 [2024-07-15 07:54:47.774730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.774744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.774895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.774924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:09.191 [2024-07-15 07:54:47.774939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.774951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.775019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:09.191 [2024-07-15 07:54:47.775032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.775044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.775231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:09.191 [2024-07-15 07:54:47.775245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.775256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.775331] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:09.191 [2024-07-15 07:54:47.775345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.775357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.775429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:09.191 [2024-07-15 07:54:47.775474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.775492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:09.191 [2024-07-15 07:54:47.775572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:09.191 [2024-07-15 07:54:47.775586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:09.191 [2024-07-15 07:54:47.775598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:09.191 [2024-07-15 07:54:47.775780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.563 ms, result 0 00:46:10.566 00:46:10.566 00:46:10.566 07:54:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:46:13.146 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:46:13.146 Process with pid 83987 is not found 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83987 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83987 ']' 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83987 00:46:13.146 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83987) - No such process 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83987 is not found' 00:46:13.146 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:46:13.405 Remove shared memory files 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:46:13.405 07:54:51 
ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:46:13.405 00:46:13.405 real 3m57.877s 00:46:13.405 user 4m33.759s 00:46:13.405 sys 0m38.676s 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:46:13.405 07:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:13.405 ************************************ 00:46:13.405 END TEST ftl_dirty_shutdown 00:46:13.405 ************************************ 00:46:13.405 07:54:51 ftl -- common/autotest_common.sh@1142 -- # return 0 00:46:13.405 07:54:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:46:13.405 07:54:51 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:46:13.405 07:54:51 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:13.405 07:54:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:46:13.405 ************************************ 00:46:13.405 START TEST ftl_upgrade_shutdown 00:46:13.405 ************************************ 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:46:13.405 * Looking for test storage... 00:46:13.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:46:13.405 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:46:13.406 
07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86423 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86423 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86423 ']' 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:13.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:13.406 07:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:13.665 [2024-07-15 07:54:52.081050] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
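The trace above amounts to: parameterize FTL (base bdev on 0000:00:11.0 at 20480 MiB, NV cache on 0000:00:10.0 at 5120 MiB, L2P DRAM limit 2 MiB), launch the target-side spdk_tgt pinned to core 0, and block until its RPC socket answers. A minimal manual equivalent of that bring-up, where the polling loop is only an illustrative stand-in for the test's waitforlisten helper:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  tgt_pid=$!
  # Poll the default RPC socket until the target answers; waitforlisten adds retries and timeouts on top of this.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done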
00:46:13.665 [2024-07-15 07:54:52.081277] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86423 ] 00:46:13.665 [2024-07-15 07:54:52.264240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:14.232 [2024-07-15 07:54:52.590122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.167 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:46:15.168 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:46:15.426 07:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:46:15.685 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:46:15.685 { 00:46:15.685 "name": "basen1", 00:46:15.685 "aliases": [ 00:46:15.685 "4d8c7c8b-62b5-41a9-ac27-6fa2a3d788f8" 00:46:15.685 ], 00:46:15.685 "product_name": "NVMe disk", 00:46:15.685 "block_size": 4096, 00:46:15.685 "num_blocks": 1310720, 00:46:15.685 "uuid": "4d8c7c8b-62b5-41a9-ac27-6fa2a3d788f8", 00:46:15.685 "assigned_rate_limits": { 00:46:15.685 "rw_ios_per_sec": 0, 00:46:15.685 "rw_mbytes_per_sec": 0, 00:46:15.685 "r_mbytes_per_sec": 0, 00:46:15.685 "w_mbytes_per_sec": 0 00:46:15.685 }, 00:46:15.685 "claimed": true, 00:46:15.685 "claim_type": "read_many_write_one", 00:46:15.685 "zoned": false, 00:46:15.685 "supported_io_types": { 00:46:15.685 "read": true, 00:46:15.685 "write": true, 00:46:15.685 "unmap": true, 00:46:15.685 "flush": true, 00:46:15.685 "reset": true, 00:46:15.685 "nvme_admin": true, 00:46:15.685 "nvme_io": true, 00:46:15.685 "nvme_io_md": false, 00:46:15.685 "write_zeroes": true, 00:46:15.685 "zcopy": false, 00:46:15.685 "get_zone_info": false, 00:46:15.685 "zone_management": false, 00:46:15.685 "zone_append": false, 00:46:15.685 "compare": true, 00:46:15.685 "compare_and_write": false, 00:46:15.685 "abort": true, 00:46:15.685 "seek_hole": false, 00:46:15.685 "seek_data": false, 00:46:15.685 "copy": true, 00:46:15.685 "nvme_iov_md": false 00:46:15.685 }, 00:46:15.685 "driver_specific": { 00:46:15.685 "nvme": [ 00:46:15.685 { 00:46:15.685 "pci_address": "0000:00:11.0", 00:46:15.685 "trid": { 00:46:15.685 "trtype": "PCIe", 00:46:15.685 "traddr": "0000:00:11.0" 00:46:15.685 }, 00:46:15.685 "ctrlr_data": { 00:46:15.685 "cntlid": 0, 00:46:15.685 "vendor_id": "0x1b36", 00:46:15.685 "model_number": "QEMU NVMe Ctrl", 00:46:15.685 "serial_number": "12341", 00:46:15.685 "firmware_revision": "8.0.0", 00:46:15.685 "subnqn": "nqn.2019-08.org.qemu:12341", 00:46:15.685 "oacs": { 00:46:15.685 "security": 0, 00:46:15.685 "format": 1, 00:46:15.685 "firmware": 0, 00:46:15.685 "ns_manage": 1 00:46:15.685 }, 00:46:15.685 "multi_ctrlr": false, 00:46:15.685 "ana_reporting": false 00:46:15.685 }, 00:46:15.685 "vs": { 00:46:15.685 "nvme_version": "1.4" 00:46:15.685 }, 00:46:15.685 "ns_data": { 00:46:15.685 "id": 1, 00:46:15.685 "can_share": false 00:46:15.685 } 00:46:15.685 } 00:46:15.685 ], 00:46:15.685 "mp_policy": "active_passive" 00:46:15.685 } 00:46:15.685 } 00:46:15.685 ]' 00:46:15.685 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:46:15.685 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:46:15.685 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:15.944 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:46:16.204 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f1fa679b-4ccd-4839-916a-0104a4ff10e7 00:46:16.204 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:46:16.204 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1fa679b-4ccd-4839-916a-0104a4ff10e7 00:46:16.461 07:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:46:16.719 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6279c36d-591c-4bf9-afdd-3bb099b25747 00:46:16.719 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6279c36d-591c-4bf9-afdd-3bb099b25747 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 ]] 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 5120 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:46:16.977 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:46:17.236 { 00:46:17.236 "name": "5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4", 00:46:17.236 "aliases": [ 00:46:17.236 "lvs/basen1p0" 00:46:17.236 ], 00:46:17.236 "product_name": "Logical Volume", 00:46:17.236 "block_size": 4096, 00:46:17.236 "num_blocks": 5242880, 00:46:17.236 "uuid": "5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4", 00:46:17.236 "assigned_rate_limits": { 00:46:17.236 "rw_ios_per_sec": 0, 00:46:17.236 "rw_mbytes_per_sec": 0, 00:46:17.236 "r_mbytes_per_sec": 0, 00:46:17.236 "w_mbytes_per_sec": 0 00:46:17.236 }, 00:46:17.236 "claimed": false, 00:46:17.236 "zoned": false, 00:46:17.236 "supported_io_types": { 00:46:17.236 "read": true, 00:46:17.236 "write": true, 00:46:17.236 "unmap": true, 00:46:17.236 "flush": false, 00:46:17.236 "reset": true, 00:46:17.236 "nvme_admin": false, 00:46:17.236 "nvme_io": false, 00:46:17.236 "nvme_io_md": false, 00:46:17.236 "write_zeroes": true, 00:46:17.236 
"zcopy": false, 00:46:17.236 "get_zone_info": false, 00:46:17.236 "zone_management": false, 00:46:17.236 "zone_append": false, 00:46:17.236 "compare": false, 00:46:17.236 "compare_and_write": false, 00:46:17.236 "abort": false, 00:46:17.236 "seek_hole": true, 00:46:17.236 "seek_data": true, 00:46:17.236 "copy": false, 00:46:17.236 "nvme_iov_md": false 00:46:17.236 }, 00:46:17.236 "driver_specific": { 00:46:17.236 "lvol": { 00:46:17.236 "lvol_store_uuid": "6279c36d-591c-4bf9-afdd-3bb099b25747", 00:46:17.236 "base_bdev": "basen1", 00:46:17.236 "thin_provision": true, 00:46:17.236 "num_allocated_clusters": 0, 00:46:17.236 "snapshot": false, 00:46:17.236 "clone": false, 00:46:17.236 "esnap_clone": false 00:46:17.236 } 00:46:17.236 } 00:46:17.236 } 00:46:17.236 ]' 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:46:17.236 07:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:46:17.495 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:46:17.495 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:46:17.495 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:46:17.753 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:46:17.753 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:46:17.753 07:54:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5d8ae1a9-47d7-46cc-9577-d9e4c6c40cd4 -c cachen1p0 --l2p_dram_limit 2 00:46:18.013 [2024-07-15 07:54:56.592214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.592352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:46:18.013 [2024-07-15 07:54:56.592377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:46:18.013 [2024-07-15 07:54:56.592394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.592506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.592528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:46:18.013 [2024-07-15 07:54:56.592543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:46:18.013 [2024-07-15 07:54:56.592558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.592601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:46:18.013 [2024-07-15 07:54:56.593762] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:46:18.013 [2024-07-15 07:54:56.593798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.593834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:46:18.013 [2024-07-15 07:54:56.593847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.204 ms 00:46:18.013 [2024-07-15 07:54:56.593861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.593998] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0d34625b-7442-4507-94cc-90222daa87ce 00:46:18.013 [2024-07-15 07:54:56.596607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.596653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:46:18.013 [2024-07-15 07:54:56.596677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:46:18.013 [2024-07-15 07:54:56.596690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.610614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.610669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:46:18.013 [2024-07-15 07:54:56.610695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.801 ms 00:46:18.013 [2024-07-15 07:54:56.610708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.610790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.610808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:46:18.013 [2024-07-15 07:54:56.610825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:46:18.013 [2024-07-15 07:54:56.610849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.610966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.610985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:46:18.013 [2024-07-15 07:54:56.611001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:46:18.013 [2024-07-15 07:54:56.611016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.013 [2024-07-15 07:54:56.611057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:46:18.013 [2024-07-15 07:54:56.617042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.013 [2024-07-15 07:54:56.617089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:46:18.014 [2024-07-15 07:54:56.617105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.000 ms 00:46:18.014 [2024-07-15 07:54:56.617120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.014 [2024-07-15 07:54:56.617161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.014 [2024-07-15 07:54:56.617180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:46:18.014 [2024-07-15 07:54:56.617194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:46:18.014 [2024-07-15 07:54:56.617208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:46:18.014 [2024-07-15 07:54:56.617258] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:46:18.014 [2024-07-15 07:54:56.617438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:46:18.014 [2024-07-15 07:54:56.617500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:46:18.014 [2024-07-15 07:54:56.617525] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:46:18.014 [2024-07-15 07:54:56.617541] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:46:18.014 [2024-07-15 07:54:56.617558] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:46:18.014 [2024-07-15 07:54:56.617571] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:46:18.014 [2024-07-15 07:54:56.617586] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:46:18.014 [2024-07-15 07:54:56.617604] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:46:18.014 [2024-07-15 07:54:56.617618] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:46:18.014 [2024-07-15 07:54:56.617630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.014 [2024-07-15 07:54:56.617644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:46:18.014 [2024-07-15 07:54:56.617657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:46:18.014 [2024-07-15 07:54:56.617671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.014 [2024-07-15 07:54:56.617773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.014 [2024-07-15 07:54:56.617797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:46:18.014 [2024-07-15 07:54:56.617820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:46:18.014 [2024-07-15 07:54:56.617843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.014 [2024-07-15 07:54:56.617977] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:46:18.014 [2024-07-15 07:54:56.618012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:46:18.014 [2024-07-15 07:54:56.618026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:46:18.014 [2024-07-15 07:54:56.618071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:46:18.014 [2024-07-15 07:54:56.618113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:46:18.014 [2024-07-15 07:54:56.618131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:46:18.014 [2024-07-15 07:54:56.618154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:46:18.014 [2024-07-15 07:54:56.618196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:46:18.014 [2024-07-15 07:54:56.618210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:46:18.014 [2024-07-15 07:54:56.618235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:46:18.014 [2024-07-15 07:54:56.618248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:46:18.014 [2024-07-15 07:54:56.618275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:46:18.014 [2024-07-15 07:54:56.618285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:46:18.014 [2024-07-15 07:54:56.618323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:46:18.014 [2024-07-15 07:54:56.618373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:46:18.014 [2024-07-15 07:54:56.618409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:46:18.014 [2024-07-15 07:54:56.618447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:46:18.014 [2024-07-15 07:54:56.618500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:46:18.014 [2024-07-15 07:54:56.618541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:46:18.014 [2024-07-15 07:54:56.618580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:46:18.014 [2024-07-15 07:54:56.618618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:46:18.014 [2024-07-15 07:54:56.618628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618642] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:46:18.014 [2024-07-15 07:54:56.618654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:46:18.014 [2024-07-15 07:54:56.618668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:18.014 [2024-07-15 07:54:56.618694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:46:18.014 [2024-07-15 07:54:56.618705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:46:18.014 [2024-07-15 07:54:56.618722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:46:18.014 [2024-07-15 07:54:56.618733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:46:18.014 [2024-07-15 07:54:56.618747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:46:18.014 [2024-07-15 07:54:56.618759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:46:18.014 [2024-07-15 07:54:56.618780] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:46:18.014 [2024-07-15 07:54:56.618796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:46:18.014 [2024-07-15 07:54:56.618828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:46:18.014 [2024-07-15 07:54:56.618895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:46:18.014 [2024-07-15 07:54:56.618907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:46:18.014 [2024-07-15 07:54:56.618923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:46:18.014 [2024-07-15 07:54:56.618935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.618995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.619011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.619024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:46:18.014 [2024-07-15 07:54:56.619038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:46:18.014 [2024-07-15 07:54:56.619056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.619072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:18.014 [2024-07-15 07:54:56.619085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:46:18.014 [2024-07-15 07:54:56.619100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:46:18.014 [2024-07-15 07:54:56.619112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:46:18.014 [2024-07-15 07:54:56.619129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:18.014 [2024-07-15 07:54:56.619141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:46:18.014 [2024-07-15 07:54:56.619171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.219 ms 00:46:18.014 [2024-07-15 07:54:56.619183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:18.014 [2024-07-15 07:54:56.619254] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:46:18.014 [2024-07-15 07:54:56.619272] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:46:20.567 [2024-07-15 07:54:58.995832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:58.995963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:46:20.567 [2024-07-15 07:54:58.996019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2376.583 ms 00:46:20.567 [2024-07-15 07:54:58.996034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.040442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.040540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:46:20.567 [2024-07-15 07:54:59.040584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.066 ms 00:46:20.567 [2024-07-15 07:54:59.040607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.040811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.040832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:46:20.567 [2024-07-15 07:54:59.040850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:46:20.567 [2024-07-15 07:54:59.040883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.090079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.090153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:46:20.567 [2024-07-15 07:54:59.090195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.128 ms 00:46:20.567 [2024-07-15 07:54:59.090209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.090288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.090307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:46:20.567 [2024-07-15 07:54:59.090339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:46:20.567 [2024-07-15 07:54:59.090351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.091286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.091326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:46:20.567 [2024-07-15 07:54:59.091348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.829 ms 00:46:20.567 [2024-07-15 07:54:59.091360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.091440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.091474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:46:20.567 [2024-07-15 07:54:59.091495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:46:20.567 [2024-07-15 07:54:59.091507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.118096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.118156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:46:20.567 [2024-07-15 07:54:59.118197] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.541 ms 00:46:20.567 [2024-07-15 07:54:59.118209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.567 [2024-07-15 07:54:59.134064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:46:20.567 [2024-07-15 07:54:59.136075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.567 [2024-07-15 07:54:59.136132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:46:20.567 [2024-07-15 07:54:59.136151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.709 ms 00:46:20.567 [2024-07-15 07:54:59.136165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.179634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.179720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:46:20.826 [2024-07-15 07:54:59.179742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.428 ms 00:46:20.826 [2024-07-15 07:54:59.179760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.179910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.179940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:46:20.826 [2024-07-15 07:54:59.179955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:46:20.826 [2024-07-15 07:54:59.179974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.210115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.210215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:46:20.826 [2024-07-15 07:54:59.210236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.072 ms 00:46:20.826 [2024-07-15 07:54:59.210252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.242243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.242345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:46:20.826 [2024-07-15 07:54:59.242368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.932 ms 00:46:20.826 [2024-07-15 07:54:59.242384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.243457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.243541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:46:20.826 [2024-07-15 07:54:59.243560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.017 ms 00:46:20.826 [2024-07-15 07:54:59.243581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.337678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.337779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:46:20.826 [2024-07-15 07:54:59.337818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.026 ms 00:46:20.826 [2024-07-15 07:54:59.337838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.370963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:46:20.826 [2024-07-15 07:54:59.371050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:46:20.826 [2024-07-15 07:54:59.371075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.065 ms 00:46:20.826 [2024-07-15 07:54:59.371091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.400299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.400362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:46:20.826 [2024-07-15 07:54:59.400394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.145 ms 00:46:20.826 [2024-07-15 07:54:59.400409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.429183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.429255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:46:20.826 [2024-07-15 07:54:59.429274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.704 ms 00:46:20.826 [2024-07-15 07:54:59.429288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.429350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.429373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:46:20.826 [2024-07-15 07:54:59.429386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:46:20.826 [2024-07-15 07:54:59.429404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.429576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:20.826 [2024-07-15 07:54:59.429605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:46:20.826 [2024-07-15 07:54:59.429622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:46:20.826 [2024-07-15 07:54:59.429637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:20.826 [2024-07-15 07:54:59.431285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2838.458 ms, result 0 00:46:20.826 { 00:46:20.826 "name": "ftl", 00:46:20.826 "uuid": "0d34625b-7442-4507-94cc-90222daa87ce" 00:46:20.826 } 00:46:21.085 07:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:46:21.344 [2024-07-15 07:54:59.726020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:21.344 07:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:46:21.603 07:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:46:21.860 [2024-07-15 07:55:00.230750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:46:21.861 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:46:22.119 [2024-07-15 07:55:00.522441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:22.119 07:55:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:46:22.376 Fill FTL, iteration 1 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86551 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86551 /var/tmp/spdk.tgt.sock 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86551 ']' 00:46:22.376 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:46:22.377 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:46:22.377 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:46:22.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:46:22.377 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:46:22.377 07:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:22.634 [2024-07-15 07:55:01.009138] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
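The target-side RPC sequence traced over the preceding entries builds the FTL bdev stack and then exports it over NVMe/TCP for the initiator process that was just launched on core 1. Condensed into one place (all commands are taken from the traces above; the UUID captures mirror how the script records lvs= and base_bdev=, and the save_config redirect to tgt.json is inferred from the spdk_tgt_cnfg variable rather than shown by xtrace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
  lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
  lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")           # thin-provisioned 20 GiB volume
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # cachen1p0, the 5 GiB NV cache
  $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2
  # Export the FTL bdev so the initiator can attach over TCP:
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $rpc save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json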
00:46:22.634 [2024-07-15 07:55:01.009635] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86551 ] 00:46:22.634 [2024-07-15 07:55:01.184243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.208 [2024-07-15 07:55:01.533214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:24.143 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:46:24.143 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:46:24.143 07:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:46:24.143 ftln1 00:46:24.143 07:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:46:24.143 07:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86551 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86551 ']' 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86551 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86551 00:46:24.402 killing process with pid 86551 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86551' 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86551 00:46:24.402 07:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86551 00:46:26.931 07:55:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:46:26.931 07:55:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:46:26.931 [2024-07-15 07:55:05.456182] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
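Each "Fill FTL" pass is a single spdk_dd run driven by the bdev subsystem config that the initiator just dumped to ini.json: ftln1 (the NVMe/TCP attachment of the ftl bdev) is written from /dev/urandom in 1 MiB blocks, 1024 blocks per pass, at queue depth 2, with --seek advancing by 1024 blocks between iterations. The first pass, exactly as invoked above:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0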
00:46:26.931 [2024-07-15 07:55:05.456380] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86604 ] 00:46:27.189 [2024-07-15 07:55:05.635591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:27.449 [2024-07-15 07:55:05.915458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:34.012  Copying: 209/1024 [MB] (209 MBps) Copying: 421/1024 [MB] (212 MBps) Copying: 634/1024 [MB] (213 MBps) Copying: 849/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 212 MBps) 00:46:34.012 00:46:34.012 Calculate MD5 checksum, iteration 1 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:34.012 07:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:46:34.366 [2024-07-15 07:55:12.650268] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
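The checksum pass mirrors the fill in the opposite direction: spdk_dd reads the same 1 GiB region back out of ftln1 into a plain file, and the test records that file's MD5 in sums[] for later comparison. Condensed from the read-back traced above and the md5sum/cut steps that follow below (the sums[i]= capture reflects how the recorded value 886630e2... appears in the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')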
00:46:34.366 [2024-07-15 07:55:12.650499] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86674 ] 00:46:34.366 [2024-07-15 07:55:12.826719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:34.646 [2024-07-15 07:55:13.101752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:38.349  Copying: 470/1024 [MB] (470 MBps) Copying: 948/1024 [MB] (478 MBps) Copying: 1024/1024 [MB] (average 472 MBps) 00:46:38.349 00:46:38.349 07:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:46:38.349 07:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:46:40.906 Fill FTL, iteration 2 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=886630e2211e0b5a529f597f637537e4 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:40.906 07:55:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:46:40.906 [2024-07-15 07:55:19.131264] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
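Each fill/verify iteration writes 1 GiB of random data into ftln1 at a growing --seek offset and then reads the same region back with a matching --skip, recording the MD5 of the dumped file (iteration 1 above produces 886630e2211e0b5a529f597f637537e4; iteration 2 repeats at offset 1024). A rough sketch of a single iteration using only commands shown in the trace; the surrounding loop lives in ftl/upgrade_shutdown.sh and is assumed here:

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ini_config=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    # Fill: 1024 x 1 MiB random blocks into the FTL bdev at the current offset (iteration 1: seek=0).
    $dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini_config" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

    # Verify: read the same 1 GiB back into a scratch file and remember its checksum.
    $dd_bin '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini_config" \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[0]=$(md5sum "$file" | cut -f1 -d ' ')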
00:46:40.906 [2024-07-15 07:55:19.131608] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86741 ] 00:46:40.906 [2024-07-15 07:55:19.317628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.165 [2024-07-15 07:55:19.619966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:48.127  Copying: 194/1024 [MB] (194 MBps) Copying: 397/1024 [MB] (203 MBps) Copying: 609/1024 [MB] (212 MBps) Copying: 814/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:46:48.127 00:46:48.127 Calculate MD5 checksum, iteration 2 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:48.127 07:55:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:46:48.127 [2024-07-15 07:55:26.510961] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:46:48.127 [2024-07-15 07:55:26.511152] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86817 ] 00:46:48.127 [2024-07-15 07:55:26.687926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:48.385 [2024-07-15 07:55:26.986881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:53.161  Copying: 509/1024 [MB] (509 MBps) Copying: 975/1024 [MB] (466 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:46:53.161 00:46:53.161 07:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:46:53.161 07:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:46:55.065 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:46:55.065 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dc3b4d6274abec43503420efb1f907b4 00:46:55.065 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:46:55.065 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:55.065 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:46:55.323 [2024-07-15 07:55:33.827752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.323 [2024-07-15 07:55:33.827840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:55.323 [2024-07-15 07:55:33.827897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:46:55.323 [2024-07-15 07:55:33.827910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.323 [2024-07-15 07:55:33.827954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.323 [2024-07-15 07:55:33.827979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:55.323 [2024-07-15 07:55:33.827994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:46:55.323 [2024-07-15 07:55:33.828017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.323 [2024-07-15 07:55:33.828050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.323 [2024-07-15 07:55:33.828066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:55.323 [2024-07-15 07:55:33.828095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:46:55.323 [2024-07-15 07:55:33.828107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.323 [2024-07-15 07:55:33.828197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.462 ms, result 0 00:46:55.323 true 00:46:55.323 07:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:55.582 { 00:46:55.582 "name": "ftl", 00:46:55.582 "properties": [ 00:46:55.582 { 00:46:55.582 "name": "superblock_version", 00:46:55.582 "value": 5, 00:46:55.582 "read-only": true 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "name": "base_device", 00:46:55.582 "bands": [ 00:46:55.582 { 00:46:55.582 "id": 0, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 
00:46:55.582 { 00:46:55.582 "id": 1, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 2, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 3, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 4, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 5, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 6, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 7, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 8, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 9, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 10, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 11, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 12, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 13, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 14, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 15, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 16, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 17, 00:46:55.582 "state": "FREE", 00:46:55.582 "validity": 0.0 00:46:55.582 } 00:46:55.582 ], 00:46:55.582 "read-only": true 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "name": "cache_device", 00:46:55.582 "type": "bdev", 00:46:55.582 "chunks": [ 00:46:55.582 { 00:46:55.582 "id": 0, 00:46:55.582 "state": "INACTIVE", 00:46:55.582 "utilization": 0.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 1, 00:46:55.582 "state": "CLOSED", 00:46:55.582 "utilization": 1.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 2, 00:46:55.582 "state": "CLOSED", 00:46:55.582 "utilization": 1.0 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 3, 00:46:55.582 "state": "OPEN", 00:46:55.582 "utilization": 0.001953125 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "id": 4, 00:46:55.582 "state": "OPEN", 00:46:55.582 "utilization": 0.0 00:46:55.582 } 00:46:55.582 ], 00:46:55.582 "read-only": true 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "name": "verbose_mode", 00:46:55.582 "value": true, 00:46:55.582 "unit": "", 00:46:55.582 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:46:55.582 }, 00:46:55.582 { 00:46:55.582 "name": "prep_upgrade_on_shutdown", 00:46:55.582 "value": false, 00:46:55.582 "unit": "", 00:46:55.582 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:46:55.582 } 00:46:55.582 ] 00:46:55.582 } 00:46:55.582 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:46:55.841 [2024-07-15 07:55:34.336648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.841 [2024-07-15 
07:55:34.337005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:55.841 [2024-07-15 07:55:34.337168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:46:55.841 [2024-07-15 07:55:34.337329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.841 [2024-07-15 07:55:34.337423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.841 [2024-07-15 07:55:34.337546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:55.841 [2024-07-15 07:55:34.337673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:46:55.841 [2024-07-15 07:55:34.337854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.841 [2024-07-15 07:55:34.338038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:55.841 [2024-07-15 07:55:34.338199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:55.841 [2024-07-15 07:55:34.338226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:46:55.841 [2024-07-15 07:55:34.338238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:55.841 [2024-07-15 07:55:34.338338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.674 ms, result 0 00:46:55.841 true 00:46:55.841 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:46:55.841 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:55.841 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:46:56.099 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:46:56.099 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:46:56.099 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:46:56.358 [2024-07-15 07:55:34.925346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:56.358 [2024-07-15 07:55:34.925432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:56.358 [2024-07-15 07:55:34.925509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:46:56.358 [2024-07-15 07:55:34.925525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:56.358 [2024-07-15 07:55:34.925568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:56.358 [2024-07-15 07:55:34.925585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:56.358 [2024-07-15 07:55:34.925599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:46:56.358 [2024-07-15 07:55:34.925627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:56.358 [2024-07-15 07:55:34.925656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:56.358 [2024-07-15 07:55:34.925671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:56.358 [2024-07-15 07:55:34.925684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:46:56.358 [2024-07-15 07:55:34.925696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
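The property round-trips above toggle verbose_mode and prep_upgrade_on_shutdown on the ftl bdev and then count how many cache_device chunks already hold data; the jq filter is copied verbatim from the trace and evaluates to 3 here (two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk). A hedged sketch of that check; the script itself then tests the count against 0 (the trace shows '[[ 3 -eq 0 ]]') before triggering the shutdown:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

    # Count cache chunks with non-zero utilization from the properties JSON.
    used=$($rpc bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    echo "used cache chunks: $used"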
00:46:56.358 [2024-07-15 07:55:34.925781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.440 ms, result 0 00:46:56.358 true 00:46:56.358 07:55:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:56.618 { 00:46:56.618 "name": "ftl", 00:46:56.618 "properties": [ 00:46:56.618 { 00:46:56.618 "name": "superblock_version", 00:46:56.618 "value": 5, 00:46:56.618 "read-only": true 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "name": "base_device", 00:46:56.618 "bands": [ 00:46:56.618 { 00:46:56.618 "id": 0, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 1, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 2, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 3, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 4, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 5, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 6, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 7, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 8, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 9, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 10, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 11, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 12, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 13, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 14, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 15, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 16, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 17, 00:46:56.618 "state": "FREE", 00:46:56.618 "validity": 0.0 00:46:56.618 } 00:46:56.618 ], 00:46:56.618 "read-only": true 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "name": "cache_device", 00:46:56.618 "type": "bdev", 00:46:56.618 "chunks": [ 00:46:56.618 { 00:46:56.618 "id": 0, 00:46:56.618 "state": "INACTIVE", 00:46:56.618 "utilization": 0.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 1, 00:46:56.618 "state": "CLOSED", 00:46:56.618 "utilization": 1.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 2, 00:46:56.618 "state": "CLOSED", 00:46:56.618 "utilization": 1.0 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 3, 00:46:56.618 "state": "OPEN", 00:46:56.618 "utilization": 0.001953125 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "id": 4, 00:46:56.618 "state": "OPEN", 00:46:56.618 "utilization": 0.0 00:46:56.618 } 00:46:56.618 ], 00:46:56.618 "read-only": true 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "name": "verbose_mode", 00:46:56.618 "value": 
true, 00:46:56.618 "unit": "", 00:46:56.618 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:46:56.618 }, 00:46:56.618 { 00:46:56.618 "name": "prep_upgrade_on_shutdown", 00:46:56.618 "value": true, 00:46:56.618 "unit": "", 00:46:56.618 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:46:56.618 } 00:46:56.618 ] 00:46:56.618 } 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86423 ]] 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86423 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86423 ']' 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86423 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86423 00:46:56.618 killing process with pid 86423 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86423' 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86423 00:46:56.618 07:55:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86423 00:46:58.043 [2024-07-15 07:55:36.356978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:46:58.043 [2024-07-15 07:55:36.376084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:58.043 [2024-07-15 07:55:36.376158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:46:58.043 [2024-07-15 07:55:36.376181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:46:58.043 [2024-07-15 07:55:36.376193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:58.043 [2024-07-15 07:55:36.376227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:46:58.043 [2024-07-15 07:55:36.380438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:58.043 [2024-07-15 07:55:36.380498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:46:58.043 [2024-07-15 07:55:36.380532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.183 ms 00:46:58.043 [2024-07-15 07:55:36.380544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.361889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.361981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:47:08.017 [2024-07-15 07:55:45.362022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8981.353 ms 00:47:08.017 [2024-07-15 07:55:45.362035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.363419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:47:08.017 [2024-07-15 07:55:45.363468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:47:08.017 [2024-07-15 07:55:45.363493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.357 ms 00:47:08.017 [2024-07-15 07:55:45.363506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.364754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.364817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:47:08.017 [2024-07-15 07:55:45.364850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.200 ms 00:47:08.017 [2024-07-15 07:55:45.364862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.379164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.379211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:47:08.017 [2024-07-15 07:55:45.379237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.255 ms 00:47:08.017 [2024-07-15 07:55:45.379255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.387675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.387750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:47:08.017 [2024-07-15 07:55:45.387784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.377 ms 00:47:08.017 [2024-07-15 07:55:45.387796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.387963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.387982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:47:08.017 [2024-07-15 07:55:45.387995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.122 ms 00:47:08.017 [2024-07-15 07:55:45.388006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.400384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.400423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:47:08.017 [2024-07-15 07:55:45.400440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.341 ms 00:47:08.017 [2024-07-15 07:55:45.400466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.413449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.413520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:47:08.017 [2024-07-15 07:55:45.413539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.938 ms 00:47:08.017 [2024-07-15 07:55:45.413551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.427226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.427324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:47:08.017 [2024-07-15 07:55:45.427344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.615 ms 00:47:08.017 [2024-07-15 07:55:45.427356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.440351] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:47:08.017 [2024-07-15 07:55:45.440434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:47:08.017 [2024-07-15 07:55:45.440471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.855 ms 00:47:08.017 [2024-07-15 07:55:45.440486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.017 [2024-07-15 07:55:45.440538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:47:08.017 [2024-07-15 07:55:45.440565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:47:08.017 [2024-07-15 07:55:45.440597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:47:08.017 [2024-07-15 07:55:45.440611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:47:08.017 [2024-07-15 07:55:45.440625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:08.017 [2024-07-15 07:55:45.440639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:08.018 [2024-07-15 07:55:45.440861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:47:08.018 [2024-07-15 07:55:45.440873] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0d34625b-7442-4507-94cc-90222daa87ce 00:47:08.018 [2024-07-15 07:55:45.440886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:47:08.018 [2024-07-15 07:55:45.440898] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:47:08.018 [2024-07-15 07:55:45.440909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:47:08.018 [2024-07-15 07:55:45.440922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:47:08.018 [2024-07-15 07:55:45.440933] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:47:08.018 [2024-07-15 07:55:45.440946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:47:08.018 [2024-07-15 07:55:45.440958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:47:08.018 [2024-07-15 07:55:45.440969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:47:08.018 [2024-07-15 07:55:45.440979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:47:08.018 [2024-07-15 07:55:45.440990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.018 [2024-07-15 07:55:45.441003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:47:08.018 [2024-07-15 07:55:45.441016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:47:08.018 [2024-07-15 07:55:45.441035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.459311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.018 [2024-07-15 07:55:45.459634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:47:08.018 [2024-07-15 07:55:45.459785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.245 ms 00:47:08.018 [2024-07-15 07:55:45.459812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.460426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:08.018 [2024-07-15 07:55:45.460446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:47:08.018 [2024-07-15 07:55:45.460494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:47:08.018 [2024-07-15 07:55:45.460506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.518116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.518219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:08.018 [2024-07-15 07:55:45.518256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.518268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.518361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.518376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:08.018 [2024-07-15 07:55:45.518398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.518411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.518569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.518590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:08.018 [2024-07-15 07:55:45.518604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.518616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.518658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.518680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:08.018 [2024-07-15 07:55:45.518692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.518709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.633923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.634008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:08.018 [2024-07-15 07:55:45.634028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.634041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.726841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.726944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:08.018 [2024-07-15 07:55:45.726975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.726988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:08.018 [2024-07-15 07:55:45.727149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:08.018 [2024-07-15 07:55:45.727265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:08.018 [2024-07-15 07:55:45.727473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:47:08.018 [2024-07-15 07:55:45.727588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:08.018 [2024-07-15 07:55:45.727694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.727770] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:08.018 [2024-07-15 07:55:45.727788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:08.018 [2024-07-15 07:55:45.727801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:08.018 [2024-07-15 07:55:45.727823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:08.018 [2024-07-15 07:55:45.728024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9351.938 ms, result 0 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=87044 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 87044 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 87044 ']' 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:12.221 07:55:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:12.221 [2024-07-15 07:55:50.700857] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
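With the graceful shutdown complete ('FTL shutdown' finishes above with result 0 after roughly 9.35 s), the target is restarted from the previously saved tgt.json and the harness waits for its RPC socket before continuing. A sketch of that restart based on the spdk_tgt invocation and the waitforlisten call visible in the trace; backgrounding with & is an assumption, and waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock:

    tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

    # Relaunch the target with the configuration captured before shutdown.
    $tgt_bin '--cpumask=[0]' --config="$tgt_config" &
    spdk_tgt_pid=$!

    # Block until the new process answers on /var/tmp/spdk.sock
    # (requires sourcing test/common/autotest_common.sh for waitforlisten).
    waitforlisten "$spdk_tgt_pid"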
00:47:12.221 [2024-07-15 07:55:50.701064] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87044 ] 00:47:12.480 [2024-07-15 07:55:50.872849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:12.738 [2024-07-15 07:55:51.135145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:13.672 [2024-07-15 07:55:52.110473] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:13.672 [2024-07-15 07:55:52.110581] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:13.672 [2024-07-15 07:55:52.264224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.672 [2024-07-15 07:55:52.264291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:47:13.672 [2024-07-15 07:55:52.264334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:47:13.672 [2024-07-15 07:55:52.264345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.672 [2024-07-15 07:55:52.264428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.672 [2024-07-15 07:55:52.264447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:13.672 [2024-07-15 07:55:52.264460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:47:13.672 [2024-07-15 07:55:52.264522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.672 [2024-07-15 07:55:52.264592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:47:13.672 [2024-07-15 07:55:52.265609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:47:13.672 [2024-07-15 07:55:52.265654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.672 [2024-07-15 07:55:52.265670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:13.672 [2024-07-15 07:55:52.265684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:47:13.672 [2024-07-15 07:55:52.265696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.672 [2024-07-15 07:55:52.268371] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:47:13.931 [2024-07-15 07:55:52.287931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.287999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:47:13.931 [2024-07-15 07:55:52.288018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.560 ms 00:47:13.931 [2024-07-15 07:55:52.288029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.288111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.288130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:47:13.931 [2024-07-15 07:55:52.288142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:47:13.931 [2024-07-15 07:55:52.288153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.302636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 
07:55:52.302727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:13.931 [2024-07-15 07:55:52.302749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.360 ms 00:47:13.931 [2024-07-15 07:55:52.302763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.302962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.302986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:13.931 [2024-07-15 07:55:52.303007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:47:13.931 [2024-07-15 07:55:52.303020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.303153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.303172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:47:13.931 [2024-07-15 07:55:52.303186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:47:13.931 [2024-07-15 07:55:52.303198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.303264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:47:13.931 [2024-07-15 07:55:52.309665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.309706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:13.931 [2024-07-15 07:55:52.309740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.412 ms 00:47:13.931 [2024-07-15 07:55:52.309753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.309834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.309867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:47:13.931 [2024-07-15 07:55:52.309886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:47:13.931 [2024-07-15 07:55:52.309896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.309958] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:47:13.931 [2024-07-15 07:55:52.309994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:47:13.931 [2024-07-15 07:55:52.310039] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:47:13.931 [2024-07-15 07:55:52.310061] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:47:13.931 [2024-07-15 07:55:52.310164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:47:13.931 [2024-07-15 07:55:52.310184] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:47:13.931 [2024-07-15 07:55:52.310199] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:47:13.931 [2024-07-15 07:55:52.310219] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:47:13.931 [2024-07-15 07:55:52.310232] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:47:13.931 [2024-07-15 07:55:52.310245] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:47:13.931 [2024-07-15 07:55:52.310257] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:47:13.931 [2024-07-15 07:55:52.310268] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:47:13.931 [2024-07-15 07:55:52.310279] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:47:13.931 [2024-07-15 07:55:52.310291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.310302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:47:13.931 [2024-07-15 07:55:52.310314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:47:13.931 [2024-07-15 07:55:52.310328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.310421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.931 [2024-07-15 07:55:52.310436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:47:13.931 [2024-07-15 07:55:52.310453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:47:13.931 [2024-07-15 07:55:52.310481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.931 [2024-07-15 07:55:52.310638] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:47:13.931 [2024-07-15 07:55:52.310661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:47:13.931 [2024-07-15 07:55:52.310674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:13.931 [2024-07-15 07:55:52.310686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:47:13.931 [2024-07-15 07:55:52.310715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:47:13.931 [2024-07-15 07:55:52.310737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:47:13.931 [2024-07-15 07:55:52.310749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:47:13.931 [2024-07-15 07:55:52.310759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:47:13.931 [2024-07-15 07:55:52.310780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:47:13.931 [2024-07-15 07:55:52.310791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:47:13.931 [2024-07-15 07:55:52.310818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:47:13.931 [2024-07-15 07:55:52.310843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:47:13.931 [2024-07-15 07:55:52.310894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:47:13.931 [2024-07-15 07:55:52.310907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.931 [2024-07-15 07:55:52.310918] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:47:13.931 [2024-07-15 07:55:52.310929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:47:13.931 [2024-07-15 07:55:52.310940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:13.931 [2024-07-15 07:55:52.310951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:47:13.931 [2024-07-15 07:55:52.310961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:47:13.931 [2024-07-15 07:55:52.310972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:13.931 [2024-07-15 07:55:52.310983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:47:13.931 [2024-07-15 07:55:52.310994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:47:13.931 [2024-07-15 07:55:52.311004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:13.931 [2024-07-15 07:55:52.311015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:47:13.931 [2024-07-15 07:55:52.311025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:47:13.931 [2024-07-15 07:55:52.311036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:13.931 [2024-07-15 07:55:52.311047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:47:13.932 [2024-07-15 07:55:52.311059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:47:13.932 [2024-07-15 07:55:52.311070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:47:13.932 [2024-07-15 07:55:52.311091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:47:13.932 [2024-07-15 07:55:52.311102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:47:13.932 [2024-07-15 07:55:52.311123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:47:13.932 [2024-07-15 07:55:52.311154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:47:13.932 [2024-07-15 07:55:52.311164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311175] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:47:13.932 [2024-07-15 07:55:52.311188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:47:13.932 [2024-07-15 07:55:52.311217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:13.932 [2024-07-15 07:55:52.311244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:13.932 [2024-07-15 07:55:52.311256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:47:13.932 [2024-07-15 07:55:52.311267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:47:13.932 [2024-07-15 07:55:52.311277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:47:13.932 [2024-07-15 07:55:52.311287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:47:13.932 [2024-07-15 07:55:52.311315] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:47:13.932 [2024-07-15 07:55:52.311326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:47:13.932 [2024-07-15 07:55:52.311338] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:47:13.932 [2024-07-15 07:55:52.311366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:47:13.932 [2024-07-15 07:55:52.311390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:47:13.932 [2024-07-15 07:55:52.311422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:47:13.932 [2024-07-15 07:55:52.311432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:47:13.932 [2024-07-15 07:55:52.311442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:47:13.932 [2024-07-15 07:55:52.311452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.311996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.312079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:47:13.932 [2024-07-15 07:55:52.312217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:47:13.932 [2024-07-15 07:55:52.312288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.312444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:13.932 [2024-07-15 07:55:52.312539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:47:13.932 [2024-07-15 07:55:52.312670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:47:13.932 [2024-07-15 07:55:52.312733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:47:13.932 [2024-07-15 07:55:52.312896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:13.932 [2024-07-15 07:55:52.313001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:47:13.932 [2024-07-15 07:55:52.313052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.336 ms 00:47:13.932 [2024-07-15 07:55:52.313154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:13.932 [2024-07-15 07:55:52.313307] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:47:13.932 [2024-07-15 07:55:52.313519] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:47:17.231 [2024-07-15 07:55:55.151429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.151853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:47:17.231 [2024-07-15 07:55:55.152017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2838.135 ms 00:47:17.231 [2024-07-15 07:55:55.152064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.194392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.194499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:17.231 [2024-07-15 07:55:55.194530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.004 ms 00:47:17.231 [2024-07-15 07:55:55.194543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.194752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.194772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:47:17.231 [2024-07-15 07:55:55.194786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:47:17.231 [2024-07-15 07:55:55.194799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.243199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.243294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:17.231 [2024-07-15 07:55:55.243315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.323 ms 00:47:17.231 [2024-07-15 07:55:55.243326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.243419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.243435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:17.231 [2024-07-15 07:55:55.243448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:47:17.231 [2024-07-15 07:55:55.243495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.244361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.244398] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:17.231 [2024-07-15 07:55:55.244414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.758 ms 00:47:17.231 [2024-07-15 07:55:55.244425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.244551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.244569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:17.231 [2024-07-15 07:55:55.244582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:47:17.231 [2024-07-15 07:55:55.244593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.267709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.267776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:17.231 [2024-07-15 07:55:55.267796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.082 ms 00:47:17.231 [2024-07-15 07:55:55.267808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.285307] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:47:17.231 [2024-07-15 07:55:55.285351] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:47:17.231 [2024-07-15 07:55:55.285370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.285383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:47:17.231 [2024-07-15 07:55:55.285397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.365 ms 00:47:17.231 [2024-07-15 07:55:55.285407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.301843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.301885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:47:17.231 [2024-07-15 07:55:55.301918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.372 ms 00:47:17.231 [2024-07-15 07:55:55.301930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.231 [2024-07-15 07:55:55.315937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.231 [2024-07-15 07:55:55.315974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:47:17.232 [2024-07-15 07:55:55.315990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.942 ms 00:47:17.232 [2024-07-15 07:55:55.316000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.330300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.330337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:47:17.232 [2024-07-15 07:55:55.330353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.256 ms 00:47:17.232 [2024-07-15 07:55:55.330363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.331298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.331335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:47:17.232 [2024-07-15 
07:55:55.331351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.779 ms 00:47:17.232 [2024-07-15 07:55:55.331385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.432517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.432612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:47:17.232 [2024-07-15 07:55:55.432653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 101.062 ms 00:47:17.232 [2024-07-15 07:55:55.432666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.445138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:47:17.232 [2024-07-15 07:55:55.446753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.446785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:47:17.232 [2024-07-15 07:55:55.446853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.991 ms 00:47:17.232 [2024-07-15 07:55:55.446891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.447039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.447060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:47:17.232 [2024-07-15 07:55:55.447075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:47:17.232 [2024-07-15 07:55:55.447087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.447177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.447233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:47:17.232 [2024-07-15 07:55:55.447246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:47:17.232 [2024-07-15 07:55:55.447256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.447300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.447316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:47:17.232 [2024-07-15 07:55:55.447328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:17.232 [2024-07-15 07:55:55.447338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.447399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:47:17.232 [2024-07-15 07:55:55.447415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.447427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:47:17.232 [2024-07-15 07:55:55.447440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:47:17.232 [2024-07-15 07:55:55.447450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.477752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.477833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:47:17.232 [2024-07-15 07:55:55.477872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.263 ms 00:47:17.232 [2024-07-15 07:55:55.477884] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.477994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.478013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:47:17.232 [2024-07-15 07:55:55.478027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:47:17.232 [2024-07-15 07:55:55.478050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.480068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3215.123 ms, result 0 00:47:17.232 [2024-07-15 07:55:55.494239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:17.232 [2024-07-15 07:55:55.510229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:47:17.232 [2024-07-15 07:55:55.518933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:17.232 07:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:17.232 07:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:47:17.232 07:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:17.232 07:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:47:17.232 07:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:47:17.232 [2024-07-15 07:55:55.823249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.823354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:47:17.232 [2024-07-15 07:55:55.823394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:47:17.232 [2024-07-15 07:55:55.823407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.823452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.823469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:47:17.232 [2024-07-15 07:55:55.823515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:47:17.232 [2024-07-15 07:55:55.823547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.823578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.232 [2024-07-15 07:55:55.823599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:47:17.232 [2024-07-15 07:55:55.823612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:17.232 [2024-07-15 07:55:55.823624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.232 [2024-07-15 07:55:55.823713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.483 ms, result 0 00:47:17.232 true 00:47:17.512 07:55:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:17.512 { 00:47:17.512 "name": "ftl", 00:47:17.512 "properties": [ 00:47:17.512 { 00:47:17.512 "name": "superblock_version", 00:47:17.512 "value": 5, 00:47:17.512 "read-only": true 00:47:17.512 }, 
00:47:17.512 { 00:47:17.512 "name": "base_device", 00:47:17.512 "bands": [ 00:47:17.512 { 00:47:17.512 "id": 0, 00:47:17.512 "state": "CLOSED", 00:47:17.512 "validity": 1.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 1, 00:47:17.512 "state": "CLOSED", 00:47:17.512 "validity": 1.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 2, 00:47:17.512 "state": "CLOSED", 00:47:17.512 "validity": 0.007843137254901933 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 3, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 4, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 5, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 6, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 7, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 8, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.512 }, 00:47:17.512 { 00:47:17.512 "id": 9, 00:47:17.512 "state": "FREE", 00:47:17.512 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 10, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 11, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 12, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 13, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 14, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 15, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 16, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 17, 00:47:17.513 "state": "FREE", 00:47:17.513 "validity": 0.0 00:47:17.513 } 00:47:17.513 ], 00:47:17.513 "read-only": true 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "name": "cache_device", 00:47:17.513 "type": "bdev", 00:47:17.513 "chunks": [ 00:47:17.513 { 00:47:17.513 "id": 0, 00:47:17.513 "state": "INACTIVE", 00:47:17.513 "utilization": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 1, 00:47:17.513 "state": "OPEN", 00:47:17.513 "utilization": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 2, 00:47:17.513 "state": "OPEN", 00:47:17.513 "utilization": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 3, 00:47:17.513 "state": "FREE", 00:47:17.513 "utilization": 0.0 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "id": 4, 00:47:17.513 "state": "FREE", 00:47:17.513 "utilization": 0.0 00:47:17.513 } 00:47:17.513 ], 00:47:17.513 "read-only": true 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "name": "verbose_mode", 00:47:17.513 "value": true, 00:47:17.513 "unit": "", 00:47:17.513 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:47:17.513 }, 00:47:17.513 { 00:47:17.513 "name": "prep_upgrade_on_shutdown", 00:47:17.513 "value": false, 00:47:17.513 "unit": "", 00:47:17.513 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:47:17.513 } 00:47:17.513 ] 00:47:17.513 } 00:47:17.513 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:47:17.513 07:55:56 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:47:17.513 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:17.771 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:47:17.771 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:47:17.771 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:47:17.771 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:47:17.771 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:47:18.029 Validate MD5 checksum, iteration 1 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:18.029 07:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:18.287 [2024-07-15 07:55:56.702258] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
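The two jq filters traced just above reduce the bdev_ftl_get_properties JSON to the counts the test asserts on: cache chunks with non-zero utilization and bands still reported as OPENED. A minimal stand-alone sketch of that check is below; the rpc.py path, bdev name "ftl" and the filter expressions are copied from the trace, while the variable names and the exit handling are assumptions, not the test's actual code.

    #!/usr/bin/env bash
    # Sketch: reproduce the "nothing in use yet" assertions seen in the trace.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as shown in the log

    props=$("$RPC" bdev_ftl_get_properties -b ftl)

    # Cache chunks that already hold data (expected 0 right after startup).
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

    # Bands still reported as OPENED (expected 0 after a clean startup).
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    [[ $used -ne 0 ]] && { echo "unexpected used chunks: $used"; exit 1; }
    [[ $opened -ne 0 ]] && { echo "unexpected open bands: $opened"; exit 1; }
    echo "FTL device is clean: used=$used opened=$opened"

In this run both counts come back 0 (used=0, opened=0), so the test moves on to the checksum phase that the spdk_dd launch above has just started.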
00:47:18.287 [2024-07-15 07:55:56.702803] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87122 ] 00:47:18.287 [2024-07-15 07:55:56.880136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:18.545 [2024-07-15 07:55:57.151457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:23.287  Copying: 485/1024 [MB] (485 MBps) Copying: 962/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 482 MBps) 00:47:23.287 00:47:23.287 07:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:47:23.287 07:56:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:25.184 Validate MD5 checksum, iteration 2 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=886630e2211e0b5a529f597f637537e4 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 886630e2211e0b5a529f597f637537e4 != \8\8\6\6\3\0\e\2\2\1\1\e\0\b\5\a\5\2\9\f\5\9\7\f\6\3\7\5\3\7\e\4 ]] 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:25.184 07:56:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:25.184 [2024-07-15 07:56:03.768521] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:47:25.184 [2024-07-15 07:56:03.769009] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87195 ] 00:47:25.442 [2024-07-15 07:56:03.950254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.699 [2024-07-15 07:56:04.244496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:31.824  Copying: 475/1024 [MB] (475 MBps) Copying: 956/1024 [MB] (481 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:47:31.824 00:47:31.824 07:56:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:47:31.824 07:56:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dc3b4d6274abec43503420efb1f907b4 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dc3b4d6274abec43503420efb1f907b4 != \d\c\3\b\4\d\6\2\7\4\a\b\e\c\4\3\5\0\3\4\2\0\e\f\b\1\f\9\0\7\b\4 ]] 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 87044 ]] 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 87044 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:47:33.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=87279 00:47:33.732 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 87279 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 87279 ']' 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
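The checksum phase traced above (test_validate_checksum at upgrade_shutdown.sh@111) reads the data back through the NVMe/TCP initiator in two 1 GiB slices (tcp_dd with --skip=0 and --skip=1024), hashes each slice with md5sum, and compares it against a reference value. A sketch of that loop, reconstructed from the xtrace, follows; the function name, tcp_dd options and the skip/iteration arithmetic are taken from the log, while the ref_sums array, $testdir and the return-value handling are assumptions about how the references recorded earlier in the test are kept.

    # Sketch of the validation loop as it appears in the xtrace.
    test_validate_checksum() {
        local iterations=2 skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read 1024 x 1 MiB blocks from the ftln1 namespace over NVMe/TCP,
            # starting $skip MiB into the device.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
                   --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # ref_sums[] stands in for the checksums recorded when the data was written.
            [[ $sum != "${ref_sums[$i]}" ]] && return 1
        done
        return 0
    }

Both iterations match here (886630e2... and dc3b4d62...), after which the trace shows the target being killed with kill -9 to force a dirty shutdown and a new spdk_tgt (pid 87279) being started from tgt.json; the same validation is run again at upgrade_shutdown.sh@116 once the FTL recovery below completes, and the test only passes because the identical checksums are reproduced.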
00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:33.733 07:56:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:33.991 [2024-07-15 07:56:12.383742] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 00:47:33.991 [2024-07-15 07:56:12.383955] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87279 ] 00:47:33.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 87044 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:47:33.991 [2024-07-15 07:56:12.567637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:34.557 [2024-07-15 07:56:12.888170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:35.489 [2024-07-15 07:56:13.863655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:35.489 [2024-07-15 07:56:13.863762] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:35.489 [2024-07-15 07:56:14.014261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.014346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:47:35.489 [2024-07-15 07:56:14.014393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:47:35.489 [2024-07-15 07:56:14.014405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.014527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.014548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:35.489 [2024-07-15 07:56:14.014562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:47:35.489 [2024-07-15 07:56:14.014574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.014625] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:47:35.489 [2024-07-15 07:56:14.015714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:47:35.489 [2024-07-15 07:56:14.015774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.015789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:35.489 [2024-07-15 07:56:14.015801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.156 ms 00:47:35.489 [2024-07-15 07:56:14.015812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.016330] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:47:35.489 [2024-07-15 07:56:14.037829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.037893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:47:35.489 [2024-07-15 07:56:14.037930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.499 ms 
00:47:35.489 [2024-07-15 07:56:14.037965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.049234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.049278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:47:35.489 [2024-07-15 07:56:14.049312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:47:35.489 [2024-07-15 07:56:14.049322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.049915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.049938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:35.489 [2024-07-15 07:56:14.049959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.469 ms 00:47:35.489 [2024-07-15 07:56:14.049971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.050053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.050073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:35.489 [2024-07-15 07:56:14.050088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:47:35.489 [2024-07-15 07:56:14.050099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.050148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.050165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:47:35.489 [2024-07-15 07:56:14.050177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:47:35.489 [2024-07-15 07:56:14.050195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.050242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:47:35.489 [2024-07-15 07:56:14.053887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.053925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:35.489 [2024-07-15 07:56:14.053956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.664 ms 00:47:35.489 [2024-07-15 07:56:14.053967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.054007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.489 [2024-07-15 07:56:14.054023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:47:35.489 [2024-07-15 07:56:14.054036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:47:35.489 [2024-07-15 07:56:14.054047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.489 [2024-07-15 07:56:14.054094] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:47:35.489 [2024-07-15 07:56:14.054128] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:47:35.489 [2024-07-15 07:56:14.054174] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:47:35.489 [2024-07-15 07:56:14.054205] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:47:35.489 [2024-07-15 
07:56:14.054310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:47:35.490 [2024-07-15 07:56:14.054326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:47:35.490 [2024-07-15 07:56:14.054341] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:47:35.490 [2024-07-15 07:56:14.054356] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:47:35.490 [2024-07-15 07:56:14.054370] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:47:35.490 [2024-07-15 07:56:14.054384] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:47:35.490 [2024-07-15 07:56:14.054395] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:47:35.490 [2024-07-15 07:56:14.054412] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:47:35.490 [2024-07-15 07:56:14.054423] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:47:35.490 [2024-07-15 07:56:14.054435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.490 [2024-07-15 07:56:14.054445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:47:35.490 [2024-07-15 07:56:14.054461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:47:35.490 [2024-07-15 07:56:14.054509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.490 [2024-07-15 07:56:14.054613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.490 [2024-07-15 07:56:14.054644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:47:35.490 [2024-07-15 07:56:14.054656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:47:35.490 [2024-07-15 07:56:14.054668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.490 [2024-07-15 07:56:14.054799] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:47:35.490 [2024-07-15 07:56:14.054840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:47:35.490 [2024-07-15 07:56:14.054852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:35.490 [2024-07-15 07:56:14.054864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.054920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:47:35.490 [2024-07-15 07:56:14.054932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.054942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:47:35.490 [2024-07-15 07:56:14.054969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:47:35.490 [2024-07-15 07:56:14.054981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:47:35.490 [2024-07-15 07:56:14.054992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:47:35.490 [2024-07-15 07:56:14.055029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:47:35.490 [2024-07-15 07:56:14.055055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 
07:56:14.055070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:47:35.490 [2024-07-15 07:56:14.055082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:47:35.490 [2024-07-15 07:56:14.055094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:47:35.490 [2024-07-15 07:56:14.055117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:47:35.490 [2024-07-15 07:56:14.055128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:47:35.490 [2024-07-15 07:56:14.055151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:47:35.490 [2024-07-15 07:56:14.055186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:47:35.490 [2024-07-15 07:56:14.055241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:47:35.490 [2024-07-15 07:56:14.055274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:47:35.490 [2024-07-15 07:56:14.055318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:47:35.490 [2024-07-15 07:56:14.055352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:47:35.490 [2024-07-15 07:56:14.055385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:47:35.490 [2024-07-15 07:56:14.055449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:47:35.490 [2024-07-15 07:56:14.055460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055471] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:47:35.490 [2024-07-15 07:56:14.055484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:47:35.490 
[2024-07-15 07:56:14.055496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:35.490 [2024-07-15 07:56:14.055522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:47:35.490 [2024-07-15 07:56:14.055533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:47:35.490 [2024-07-15 07:56:14.055573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:47:35.490 [2024-07-15 07:56:14.055585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:47:35.490 [2024-07-15 07:56:14.055596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:47:35.490 [2024-07-15 07:56:14.055608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:47:35.490 [2024-07-15 07:56:14.055621] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:47:35.490 [2024-07-15 07:56:14.055642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:47:35.490 [2024-07-15 07:56:14.055667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:47:35.490 [2024-07-15 07:56:14.055701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:47:35.490 [2024-07-15 07:56:14.055712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:47:35.490 [2024-07-15 07:56:14.055724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:47:35.490 [2024-07-15 07:56:14.055735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:47:35.490 [2024-07-15 07:56:14.055815] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:47:35.490 [2024-07-15 07:56:14.055829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:35.490 [2024-07-15 07:56:14.055853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:47:35.490 [2024-07-15 07:56:14.055864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:47:35.490 [2024-07-15 07:56:14.055876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:47:35.490 [2024-07-15 07:56:14.055888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.490 [2024-07-15 07:56:14.055899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:47:35.490 [2024-07-15 07:56:14.055920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:47:35.490 [2024-07-15 07:56:14.055932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.490 [2024-07-15 07:56:14.095573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.490 [2024-07-15 07:56:14.095653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:35.490 [2024-07-15 07:56:14.095692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.557 ms 00:47:35.490 [2024-07-15 07:56:14.095704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.490 [2024-07-15 07:56:14.095799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.490 [2024-07-15 07:56:14.095815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:47:35.490 [2024-07-15 07:56:14.095827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:47:35.490 [2024-07-15 07:56:14.095846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.756 [2024-07-15 07:56:14.141037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.141114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:35.757 [2024-07-15 07:56:14.141153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.080 ms 00:47:35.757 [2024-07-15 07:56:14.141166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.141268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.141291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:35.757 [2024-07-15 07:56:14.141305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:35.757 [2024-07-15 07:56:14.141316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.141533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.141553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:47:35.757 [2024-07-15 07:56:14.141566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.124 ms 00:47:35.757 [2024-07-15 07:56:14.141579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.141653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.141672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:35.757 [2024-07-15 07:56:14.141690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:47:35.757 [2024-07-15 07:56:14.141702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.165018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.165112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:35.757 [2024-07-15 07:56:14.165167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.281 ms 00:47:35.757 [2024-07-15 07:56:14.165180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.165401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.165437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:47:35.757 [2024-07-15 07:56:14.165468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:47:35.757 [2024-07-15 07:56:14.165479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.217438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.217543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:47:35.757 [2024-07-15 07:56:14.217584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.870 ms 00:47:35.757 [2024-07-15 07:56:14.217598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.230696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.230743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:47:35.757 [2024-07-15 07:56:14.230779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:47:35.757 [2024-07-15 07:56:14.230792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.324890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.324996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:47:35.757 [2024-07-15 07:56:14.325037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.942 ms 00:47:35.757 [2024-07-15 07:56:14.325050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.325366] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:47:35.757 [2024-07-15 07:56:14.325630] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:47:35.757 [2024-07-15 07:56:14.325831] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:47:35.757 [2024-07-15 07:56:14.326069] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:47:35.757 [2024-07-15 07:56:14.326091] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.326105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:47:35.757 [2024-07-15 07:56:14.326119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.927 ms 00:47:35.757 [2024-07-15 07:56:14.326132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.326249] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:47:35.757 [2024-07-15 07:56:14.326272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.326284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:47:35.757 [2024-07-15 07:56:14.326312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:47:35.757 [2024-07-15 07:56:14.326323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.345753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.345815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:47:35.757 [2024-07-15 07:56:14.345851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.394 ms 00:47:35.757 [2024-07-15 07:56:14.345870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.357584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:35.757 [2024-07-15 07:56:14.357657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:47:35.757 [2024-07-15 07:56:14.357708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:47:35.757 [2024-07-15 07:56:14.357721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:35.757 [2024-07-15 07:56:14.358216] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:47:36.348 [2024-07-15 07:56:14.948550] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:47:36.348 [2024-07-15 07:56:14.948798] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:47:36.916 [2024-07-15 07:56:15.517829] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:47:36.916 [2024-07-15 07:56:15.518013] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:36.916 [2024-07-15 07:56:15.518082] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:47:36.916 [2024-07-15 07:56:15.518117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:36.916 [2024-07-15 07:56:15.518130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:47:36.916 [2024-07-15 07:56:15.518149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1160.246 ms 00:47:36.916 [2024-07-15 07:56:15.518160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:36.916 [2024-07-15 07:56:15.518207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:36.916 [2024-07-15 07:56:15.518222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:47:36.916 
[2024-07-15 07:56:15.518235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:36.916 [2024-07-15 07:56:15.518246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.533106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:47:37.174 [2024-07-15 07:56:15.533313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.533339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:47:37.174 [2024-07-15 07:56:15.533356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.044 ms 00:47:37.174 [2024-07-15 07:56:15.533368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.534273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.534326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:47:37.174 [2024-07-15 07:56:15.534342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:47:37.174 [2024-07-15 07:56:15.534353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.536733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.536767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:47:37.174 [2024-07-15 07:56:15.536797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.345 ms 00:47:37.174 [2024-07-15 07:56:15.536808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.536859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.536874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:47:37.174 [2024-07-15 07:56:15.536886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:37.174 [2024-07-15 07:56:15.536897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.537043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.537060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:47:37.174 [2024-07-15 07:56:15.537077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:47:37.174 [2024-07-15 07:56:15.537088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.537119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.537133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:47:37.174 [2024-07-15 07:56:15.537150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:37.174 [2024-07-15 07:56:15.537161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.537207] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:47:37.174 [2024-07-15 07:56:15.537224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.537235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:47:37.174 [2024-07-15 07:56:15.537247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:47:37.174 [2024-07-15 
07:56:15.537263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.537328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:37.174 [2024-07-15 07:56:15.537343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:47:37.174 [2024-07-15 07:56:15.537354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:47:37.174 [2024-07-15 07:56:15.537364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:37.174 [2024-07-15 07:56:15.539650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1524.643 ms, result 0 00:47:37.174 [2024-07-15 07:56:15.554358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:37.174 [2024-07-15 07:56:15.570366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:47:37.174 [2024-07-15 07:56:15.580866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:37.174 Validate MD5 checksum, iteration 1 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:37.174 07:56:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:37.174 [2024-07-15 07:56:15.729719] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:47:37.174 [2024-07-15 07:56:15.729926] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87318 ] 00:47:37.432 [2024-07-15 07:56:15.911555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:37.691 [2024-07-15 07:56:16.228629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:43.891  Copying: 465/1024 [MB] (465 MBps) Copying: 935/1024 [MB] (470 MBps) Copying: 1024/1024 [MB] (average 466 MBps) 00:47:43.891 00:47:43.891 07:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:47:43.891 07:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:45.837 Validate MD5 checksum, iteration 2 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=886630e2211e0b5a529f597f637537e4 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 886630e2211e0b5a529f597f637537e4 != \8\8\6\6\3\0\e\2\2\1\1\e\0\b\5\a\5\2\9\f\5\9\7\f\6\3\7\5\3\7\e\4 ]] 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:45.837 07:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:45.837 [2024-07-15 07:56:24.135999] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
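Reconstructed from the upgrade_shutdown.sh xtrace above (lines 96-105), the checksum validation amounts to walking the ftln1 bdev in 1024 MiB windows, hashing each dump and comparing it with the checksum captured before the target was shut down. The sketch below is an approximation: where the expected checksums come from is not shown in the log (a per-iteration file.md5 is guessed here from the rm -f file.md5 in the later cleanup), and testdir simply mirrors the test/ftl path seen above.

    # Sketch of the validate-checksum loop, under the assumptions stated above.
    test_validate_checksum() {
        local testdir=/home/vagrant/spdk_repo/spdk/test/ftl
        local skip=0 iterations=2 i sum expected
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read the next 1024 MiB of ftln1 over NVMe/TCP into a scratch file.
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            skip=$((skip + 1024))
            # Hash the chunk and compare it with the checksum recorded before shutdown.
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            expected=$(sed -n "$((i + 1))p" "$testdir/file.md5")   # hypothetical layout: one md5 per line
            [[ $sum == "$expected" ]] || return 1
        done
    }

Advancing skip by count after each pass means the same scratch file is reused for every window, so the on-disk footprint stays at one window regardless of how large the bdev is.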
00:47:45.837 [2024-07-15 07:56:24.136644] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87404 ] 00:47:45.837 [2024-07-15 07:56:24.319502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:46.095 [2024-07-15 07:56:24.678670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:50.560  Copying: 427/1024 [MB] (427 MBps) Copying: 893/1024 [MB] (466 MBps) Copying: 1024/1024 [MB] (average 449 MBps) 00:47:50.560 00:47:50.560 07:56:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:47:50.560 07:56:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dc3b4d6274abec43503420efb1f907b4 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dc3b4d6274abec43503420efb1f907b4 != \d\c\3\b\4\d\6\2\7\4\a\b\e\c\4\3\5\0\3\4\2\0\e\f\b\1\f\9\0\7\b\4 ]] 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 87279 ]] 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 87279 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 87279 ']' 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 87279 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87279 00:47:53.135 killing process with pid 87279 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87279' 00:47:53.135 07:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 87279 00:47:53.135 07:56:31 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 87279 00:47:54.071 [2024-07-15 07:56:32.517853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:47:54.071 [2024-07-15 07:56:32.538182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.538259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:47:54.071 [2024-07-15 07:56:32.538299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:54.071 [2024-07-15 07:56:32.538320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.538360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:47:54.071 [2024-07-15 07:56:32.542995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.543046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:47:54.071 [2024-07-15 07:56:32.543067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.612 ms 00:47:54.071 [2024-07-15 07:56:32.543079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.543391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.543426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:47:54.071 [2024-07-15 07:56:32.543447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.279 ms 00:47:54.071 [2024-07-15 07:56:32.543459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.544888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.544931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:47:54.071 [2024-07-15 07:56:32.544964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.374 ms 00:47:54.071 [2024-07-15 07:56:32.544991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.546249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.546279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:47:54.071 [2024-07-15 07:56:32.546311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.185 ms 00:47:54.071 [2024-07-15 07:56:32.546331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.559648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.559717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:47:54.071 [2024-07-15 07:56:32.559753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.255 ms 00:47:54.071 [2024-07-15 07:56:32.559766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.566557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.566615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:47:54.071 [2024-07-15 07:56:32.566659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.726 ms 00:47:54.071 [2024-07-15 07:56:32.566672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.566793] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.566823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:47:54.071 [2024-07-15 07:56:32.566837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:47:54.071 [2024-07-15 07:56:32.566851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.578327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.578364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:47:54.071 [2024-07-15 07:56:32.578396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.453 ms 00:47:54.071 [2024-07-15 07:56:32.578407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.589864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.589899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:47:54.071 [2024-07-15 07:56:32.589930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.416 ms 00:47:54.071 [2024-07-15 07:56:32.589940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.601228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.601290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:47:54.071 [2024-07-15 07:56:32.601306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.248 ms 00:47:54.071 [2024-07-15 07:56:32.601317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.612851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.612888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:47:54.071 [2024-07-15 07:56:32.612918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.436 ms 00:47:54.071 [2024-07-15 07:56:32.612930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.612970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:47:54.071 [2024-07-15 07:56:32.612995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:47:54.071 [2024-07-15 07:56:32.613010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:47:54.071 [2024-07-15 07:56:32.613022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:47:54.071 [2024-07-15 07:56:32.613034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:54.071 [2024-07-15 07:56:32.613214] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:47:54.071 [2024-07-15 07:56:32.613246] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0d34625b-7442-4507-94cc-90222daa87ce 00:47:54.071 [2024-07-15 07:56:32.613264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:47:54.071 [2024-07-15 07:56:32.613280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:47:54.071 [2024-07-15 07:56:32.613295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:47:54.071 [2024-07-15 07:56:32.613306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:47:54.071 [2024-07-15 07:56:32.613317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:47:54.071 [2024-07-15 07:56:32.613330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:47:54.071 [2024-07-15 07:56:32.613340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:47:54.071 [2024-07-15 07:56:32.613350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:47:54.071 [2024-07-15 07:56:32.613360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:47:54.071 [2024-07-15 07:56:32.613372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.613384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:47:54.071 [2024-07-15 07:56:32.613396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:47:54.071 [2024-07-15 07:56:32.613417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.630759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.630807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:47:54.071 [2024-07-15 07:56:32.630847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.248 ms 00:47:54.071 [2024-07-15 07:56:32.630860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.071 [2024-07-15 07:56:32.631484] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:47:54.071 [2024-07-15 07:56:32.631521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:47:54.071 [2024-07-15 07:56:32.631554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:47:54.071 [2024-07-15 07:56:32.631565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.687044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.687141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:54.330 [2024-07-15 07:56:32.687164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.687178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.687293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.687310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:54.330 [2024-07-15 07:56:32.687333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.687346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.687559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.687584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:54.330 [2024-07-15 07:56:32.687599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.687610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.687638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.687661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:54.330 [2024-07-15 07:56:32.687674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.687692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.812061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.812150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:54.330 [2024-07-15 07:56:32.812174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.812202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.904070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.904145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:54.330 [2024-07-15 07:56:32.904184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.904197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.904345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.904366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:54.330 [2024-07-15 07:56:32.904380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.904393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 
07:56:32.904456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.904686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:54.330 [2024-07-15 07:56:32.904747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.904790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.905009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.905074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:54.330 [2024-07-15 07:56:32.905184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.905208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.905274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.905294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:47:54.330 [2024-07-15 07:56:32.905307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.905320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.905374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.905398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:54.330 [2024-07-15 07:56:32.905411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.905424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.905506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:54.330 [2024-07-15 07:56:32.905526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:54.330 [2024-07-15 07:56:32.905540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:54.330 [2024-07-15 07:56:32.905552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:54.330 [2024-07-15 07:56:32.905742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 367.535 ms, result 0 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:47:55.718 Remove shared memory files 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid87044 
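The FTL shutdown just logged was triggered by the killprocess 87279 call a few lines earlier; the autotest_common.sh xtrace (lines 948-972) gives enough to sketch that helper. The sudo branch and the exact return codes are assumptions here, since the log only shows the checks being evaluated:

    # Sketch of killprocess as reconstructed from the xtrace; not the verbatim helper.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                             # @948: a pid is required
        if ! kill -0 "$pid"; then                             # @952: is it still alive?
            echo "Process with pid $pid is not found"
            return 0                                          # return value assumed
        fi
        if [ "$(uname)" = Linux ]; then                       # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954
        fi
        if [ "$process_name" = sudo ]; then                   # @958: sudo wrappers handled specially
            :   # assumed: the real helper signals the wrapped child rather than sudo itself
        else
            echo "killing process with pid $pid"              # @966
            kill "$pid"                                       # @967
        fi
        wait "$pid"                                           # @972: reap the target we started
    }

The final wait matters: it reaps the target so its exit status is collected before the script moves on to removing the JSON configs and shared-memory files.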
00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:47:55.718 ************************************ 00:47:55.718 END TEST ftl_upgrade_shutdown 00:47:55.718 ************************************ 00:47:55.718 00:47:55.718 real 1m42.444s 00:47:55.718 user 2m23.585s 00:47:55.718 sys 0m26.655s 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:55.718 07:56:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:55.718 Process with pid 79523 is not found 00:47:55.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@1142 -- # return 0 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@14 -- # killprocess 79523 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@948 -- # '[' -z 79523 ']' 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@952 -- # kill -0 79523 00:47:55.718 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79523) - No such process 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79523 is not found' 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87554 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87554 00:47:55.718 07:56:34 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@829 -- # '[' -z 87554 ']' 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:55.718 07:56:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:55.976 [2024-07-15 07:56:34.458576] Starting SPDK v24.09-pre git sha1 9c8eb396d / DPDK 24.03.0 initialization... 
00:47:55.976 [2024-07-15 07:56:34.459048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87554 ] 00:47:56.234 [2024-07-15 07:56:34.634788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:56.492 [2024-07-15 07:56:34.907406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.427 07:56:35 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:57.427 07:56:35 ftl -- common/autotest_common.sh@862 -- # return 0 00:47:57.427 07:56:35 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:47:57.684 nvme0n1 00:47:57.684 07:56:36 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:47:57.684 07:56:36 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:47:57.684 07:56:36 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:47:57.942 07:56:36 ftl -- ftl/common.sh@28 -- # stores=6279c36d-591c-4bf9-afdd-3bb099b25747 00:47:57.942 07:56:36 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:47:57.942 07:56:36 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6279c36d-591c-4bf9-afdd-3bb099b25747 00:47:58.200 07:56:36 ftl -- ftl/ftl.sh@23 -- # killprocess 87554 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@948 -- # '[' -z 87554 ']' 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@952 -- # kill -0 87554 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@953 -- # uname 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87554 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87554' 00:47:58.200 killing process with pid 87554 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@967 -- # kill 87554 00:47:58.200 07:56:36 ftl -- common/autotest_common.sh@972 -- # wait 87554 00:48:00.728 07:56:39 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:48:01.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:01.052 Waiting for block devices as requested 00:48:01.052 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:48:01.052 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:01.310 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:48:01.310 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:48:06.571 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:48:06.571 07:56:44 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:48:06.571 Remove shared memory files 00:48:06.571 07:56:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:48:06.571 07:56:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:48:06.571 07:56:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:48:06.571 07:56:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:48:06.571 07:56:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:06.571 07:56:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:48:06.571 
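Before that final target was killed, the xtrace shows clear_lvols deleting any leftover lvolstore (here 6279c36d-591c-4bf9-afdd-3bb099b25747) through the RPC client. A sketch consistent with the ftl/common.sh lines 28-30 seen above:

    # Sketch of clear_lvols: list every lvolstore UUID over RPC and delete each one.
    clear_lvols() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local stores lvs
        stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')   # @28
        for lvs in $stores; do                                        # @29
            $rpc bdev_lvol_delete_lvstore -u "$lvs"                   # @30
        done
    }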
************************************ 00:48:06.571 END TEST ftl 00:48:06.571 ************************************ 00:48:06.571 00:48:06.571 real 12m19.946s 00:48:06.571 user 15m20.647s 00:48:06.571 sys 1m40.841s 00:48:06.571 07:56:44 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:06.571 07:56:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:06.571 07:56:45 -- common/autotest_common.sh@1142 -- # return 0 00:48:06.571 07:56:45 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:48:06.571 07:56:45 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:48:06.571 07:56:45 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:48:06.571 07:56:45 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:48:06.571 07:56:45 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:48:06.571 07:56:45 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:48:06.571 07:56:45 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:48:06.571 07:56:45 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:48:06.571 07:56:45 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:48:06.571 07:56:45 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:48:06.571 07:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:06.571 07:56:45 -- common/autotest_common.sh@10 -- # set +x 00:48:06.571 07:56:45 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:48:06.571 07:56:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:48:06.571 07:56:45 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:48:06.571 07:56:45 -- common/autotest_common.sh@10 -- # set +x 00:48:07.945 INFO: APP EXITING 00:48:07.945 INFO: killing all VMs 00:48:07.945 INFO: killing vhost app 00:48:07.946 INFO: EXIT DONE 00:48:08.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:08.770 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:48:08.770 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:48:08.770 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:48:08.770 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:48:09.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:09.594 Cleaning 00:48:09.594 Removing: /var/run/dpdk/spdk0/config 00:48:09.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:09.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:09.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:09.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:09.594 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:09.594 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:09.594 Removing: /var/run/dpdk/spdk0 00:48:09.594 Removing: /var/run/dpdk/spdk_pid61852 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62085 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62311 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62421 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62477 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62605 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62634 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62820 00:48:09.594 Removing: /var/run/dpdk/spdk_pid62931 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63030 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63144 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63250 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63295 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63337 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63407 00:48:09.594 Removing: /var/run/dpdk/spdk_pid63524 00:48:09.594 Removing: 
/var/run/dpdk/spdk_pid63978 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64059 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64138 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64160 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64319 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64335 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64494 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64516 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64585 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64609 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64673 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64702 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64889 00:48:09.594 Removing: /var/run/dpdk/spdk_pid64931 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65012 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65099 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65135 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65213 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65264 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65312 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65364 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65411 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65457 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65509 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65556 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65607 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65653 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65701 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65753 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65800 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65846 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65898 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65945 00:48:09.594 Removing: /var/run/dpdk/spdk_pid65997 00:48:09.594 Removing: /var/run/dpdk/spdk_pid66046 00:48:09.594 Removing: /var/run/dpdk/spdk_pid66096 00:48:09.594 Removing: /var/run/dpdk/spdk_pid66148 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66196 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66283 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66405 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66572 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66678 00:48:09.852 Removing: /var/run/dpdk/spdk_pid66720 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67192 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67296 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67418 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67477 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67508 00:48:09.852 Removing: /var/run/dpdk/spdk_pid67584 00:48:09.852 Removing: /var/run/dpdk/spdk_pid68227 00:48:09.852 Removing: /var/run/dpdk/spdk_pid68275 00:48:09.852 Removing: /var/run/dpdk/spdk_pid68789 00:48:09.852 Removing: /var/run/dpdk/spdk_pid68894 00:48:09.852 Removing: /var/run/dpdk/spdk_pid69021 00:48:09.852 Removing: /var/run/dpdk/spdk_pid69080 00:48:09.852 Removing: /var/run/dpdk/spdk_pid69111 00:48:09.852 Removing: /var/run/dpdk/spdk_pid69142 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71012 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71156 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71167 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71183 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71223 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71227 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71239 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71284 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71288 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71300 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71345 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71349 00:48:09.852 Removing: /var/run/dpdk/spdk_pid71361 
00:48:09.852 Removing: /var/run/dpdk/spdk_pid72706 00:48:09.852 Removing: /var/run/dpdk/spdk_pid72816 00:48:09.852 Removing: /var/run/dpdk/spdk_pid74217 00:48:09.852 Removing: /var/run/dpdk/spdk_pid75546 00:48:09.852 Removing: /var/run/dpdk/spdk_pid75696 00:48:09.852 Removing: /var/run/dpdk/spdk_pid75835 00:48:09.852 Removing: /var/run/dpdk/spdk_pid75972 00:48:09.852 Removing: /var/run/dpdk/spdk_pid76132 00:48:09.852 Removing: /var/run/dpdk/spdk_pid76217 00:48:09.852 Removing: /var/run/dpdk/spdk_pid76366 00:48:09.852 Removing: /var/run/dpdk/spdk_pid76739 00:48:09.852 Removing: /var/run/dpdk/spdk_pid76781 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77272 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77461 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77563 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77679 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77740 00:48:09.852 Removing: /var/run/dpdk/spdk_pid77771 00:48:09.852 Removing: /var/run/dpdk/spdk_pid78057 00:48:09.852 Removing: /var/run/dpdk/spdk_pid78123 00:48:09.852 Removing: /var/run/dpdk/spdk_pid78203 00:48:09.852 Removing: /var/run/dpdk/spdk_pid78600 00:48:09.852 Removing: /var/run/dpdk/spdk_pid78747 00:48:09.852 Removing: /var/run/dpdk/spdk_pid79523 00:48:09.852 Removing: /var/run/dpdk/spdk_pid79675 00:48:09.852 Removing: /var/run/dpdk/spdk_pid79885 00:48:09.852 Removing: /var/run/dpdk/spdk_pid79999 00:48:09.852 Removing: /var/run/dpdk/spdk_pid80364 00:48:09.852 Removing: /var/run/dpdk/spdk_pid80644 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81012 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81227 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81362 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81437 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81575 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81613 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81688 00:48:09.852 Removing: /var/run/dpdk/spdk_pid81898 00:48:09.852 Removing: /var/run/dpdk/spdk_pid82146 00:48:09.852 Removing: /var/run/dpdk/spdk_pid82569 00:48:09.852 Removing: /var/run/dpdk/spdk_pid83029 00:48:09.852 Removing: /var/run/dpdk/spdk_pid83473 00:48:09.852 Removing: /var/run/dpdk/spdk_pid83987 00:48:09.852 Removing: /var/run/dpdk/spdk_pid84132 00:48:09.852 Removing: /var/run/dpdk/spdk_pid84248 00:48:09.852 Removing: /var/run/dpdk/spdk_pid84923 00:48:09.852 Removing: /var/run/dpdk/spdk_pid85004 00:48:09.852 Removing: /var/run/dpdk/spdk_pid85477 00:48:09.852 Removing: /var/run/dpdk/spdk_pid85911 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86423 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86551 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86604 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86674 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86741 00:48:09.852 Removing: /var/run/dpdk/spdk_pid86817 00:48:09.852 Removing: /var/run/dpdk/spdk_pid87044 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87122 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87195 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87279 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87318 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87404 00:48:10.110 Removing: /var/run/dpdk/spdk_pid87554 00:48:10.110 Clean 00:48:10.110 07:56:48 -- common/autotest_common.sh@1451 -- # return 0 00:48:10.110 07:56:48 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:48:10.110 07:56:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:10.110 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:48:10.110 07:56:48 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:48:10.110 07:56:48 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:48:10.110 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:48:10.110 07:56:48 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:10.110 07:56:48 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:48:10.110 07:56:48 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:48:10.110 07:56:48 -- spdk/autotest.sh@391 -- # hash lcov 00:48:10.110 07:56:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:48:10.110 07:56:48 -- spdk/autotest.sh@393 -- # hostname 00:48:10.110 07:56:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:48:10.368 geninfo: WARNING: invalid characters removed from testname! 00:48:36.898 07:57:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:41.082 07:57:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:43.612 07:57:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:46.886 07:57:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:49.415 07:57:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:51.943 07:57:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:55.225 07:57:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:48:55.225 07:57:33 -- common/autobuild_common.sh@15 -- $ source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:55.225 07:57:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:48:55.225 07:57:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:55.225 07:57:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:55.225 07:57:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.225 07:57:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.225 07:57:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.225 07:57:33 -- paths/export.sh@5 -- $ export PATH 00:48:55.225 07:57:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.225 07:57:33 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:48:55.225 07:57:33 -- common/autobuild_common.sh@444 -- $ date +%s 00:48:55.225 07:57:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721030253.XXXXXX 00:48:55.225 07:57:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721030253.hnb9aF 00:48:55.225 07:57:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:48:55.225 07:57:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:48:55.225 07:57:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:48:55.225 07:57:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:48:55.225 07:57:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:48:55.225 07:57:33 -- common/autobuild_common.sh@460 -- $ get_config_params 00:48:55.225 07:57:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:48:55.225 07:57:33 -- common/autotest_common.sh@10 -- $ set +x 00:48:55.225 07:57:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-xnvme' 00:48:55.225 07:57:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:48:55.225 07:57:33 -- pm/common@17 -- $ local monitor 00:48:55.225 07:57:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.225 07:57:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.225 07:57:33 -- pm/common@25 -- $ sleep 1 00:48:55.225 07:57:33 -- pm/common@21 -- $ date +%s 00:48:55.225 07:57:33 -- pm/common@21 -- $ date +%s 00:48:55.225 07:57:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721030253 00:48:55.225 07:57:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721030253 00:48:55.225 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721030253_collect-vmstat.pm.log 00:48:55.225 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721030253_collect-cpu-load.pm.log 00:48:55.789 07:57:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:48:55.789 07:57:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:48:55.789 07:57:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:48:55.789 07:57:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:48:55.789 07:57:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:48:55.789 07:57:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:48:55.789 07:57:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:55.789 07:57:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:48:55.789 07:57:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:55.789 07:57:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:48:55.789 07:57:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:48:55.789 07:57:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:48:55.789 07:57:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:48:55.789 07:57:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.789 07:57:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:48:55.789 07:57:34 -- pm/common@44 -- $ pid=89255 00:48:55.789 07:57:34 -- pm/common@50 -- $ kill -TERM 89255 00:48:55.789 07:57:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.789 07:57:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:48:55.789 07:57:34 -- pm/common@44 -- $ pid=89257 00:48:55.789 07:57:34 -- pm/common@50 -- $ kill -TERM 89257 00:48:55.789 + [[ -n 5206 ]] 00:48:55.789 + sudo kill 5206 00:48:56.729 [Pipeline] } 00:48:56.746 [Pipeline] // timeout 00:48:56.750 [Pipeline] } 00:48:56.766 [Pipeline] // stage 00:48:56.770 [Pipeline] } 00:48:56.786 [Pipeline] // catchError 00:48:56.793 [Pipeline] stage 00:48:56.795 [Pipeline] { (Stop VM) 00:48:56.807 [Pipeline] sh 00:48:57.082 + vagrant halt 00:49:00.409 ==> default: Halting domain... 00:49:07.001 [Pipeline] sh 00:49:07.278 + vagrant destroy -f 00:49:10.556 ==> default: Removing domain... 
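For completeness, the pm/common.sh xtrace in the epilogue above shows how the per-run resource monitors are torn down: each collector recorded its pid in a file under the output power/ directory, so stopping them is just reading the pid file and signalling. A rough sketch, with MONITOR_RESOURCES and the output directory variable named only by assumption:

    # Sketch of the monitor teardown seen at pm/common.sh@40-50; names outside the
    # xtrace (output_dir, MONITOR_RESOURCES contents) are assumptions.
    signal_monitor_resources() {
        local signal=$1 monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do       # e.g. collect-cpu-load, collect-vmstat
            local pidfile="$output_dir/power/$monitor.pid"
            [[ -e $pidfile ]] || continue                  # @43: this collector never started
            pid=$(<"$pidfile")                             # @44
            kill -"$signal" "$pid"                         # @50: e.g. kill -TERM 89255
        done
    }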
00:49:11.131 [Pipeline] sh 00:49:11.408 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:49:11.416 [Pipeline] } 00:49:11.437 [Pipeline] // stage 00:49:11.442 [Pipeline] } 00:49:11.461 [Pipeline] // dir 00:49:11.466 [Pipeline] } 00:49:11.484 [Pipeline] // wrap 00:49:11.491 [Pipeline] } 00:49:11.507 [Pipeline] // catchError 00:49:11.516 [Pipeline] stage 00:49:11.518 [Pipeline] { (Epilogue) 00:49:11.533 [Pipeline] sh 00:49:11.813 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:18.380 [Pipeline] catchError 00:49:18.381 [Pipeline] { 00:49:18.395 [Pipeline] sh 00:49:18.672 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:18.930 Artifacts sizes are good 00:49:18.938 [Pipeline] } 00:49:18.956 [Pipeline] // catchError 00:49:18.967 [Pipeline] archiveArtifacts 00:49:18.974 Archiving artifacts 00:49:19.114 [Pipeline] cleanWs 00:49:19.124 [WS-CLEANUP] Deleting project workspace... 00:49:19.124 [WS-CLEANUP] Deferred wipeout is used... 00:49:19.129 [WS-CLEANUP] done 00:49:19.131 [Pipeline] } 00:49:19.148 [Pipeline] // stage 00:49:19.153 [Pipeline] } 00:49:19.172 [Pipeline] // node 00:49:19.178 [Pipeline] End of Pipeline 00:49:19.220 Finished: SUCCESS